Colloquiums and Invited Talks

Past Talks

Robust Object Re-Identification in Large Repository for Mobile Visual Search
Friday, December 01, 2017
Zhu Li
University of Missouri

Abstract: Mobile visual search has many important applications in surveillance, security, virtual and augmented reality, and e-commerce. A central technical problem in enabling these applications is visual object identification against a very large repository. Robust local features that are invariant to the image formation process, together with aggregation and compression schemes that offer both indexing efficiency and matching accuracy, are the focus of the recent MPEG standardization effort on Compact Descriptors for Visual Search (CDVS). In this talk, I will review the key technical challenges in the CDVS pipeline and cover the novel contributions made in the CDVS work on alternative interest point detection, more efficient interest point aggregation, indexing/hashing for object re-identification against a large repository, and retrieval system optimization, as well as future research directions in this area involving new depth sensors and video inputs.

Biography: Zhu Li is an associate professor with the Dept of CSEE, University of Missouri, Kansas City, USA, also serving as ad hoc co-chair for the MPEG Point Cloud Compression group. He received his PhD in Electrical & Computer Engineering from Northwestern University in 2004. He was an AFRL Summer Faculty Fellow at the US Air Force Academy in 2016 (Cyber Warfare Center) and 2017 (UAV Research Center); Sr. Staff Researcher/Sr. Manager with Samsung Research America's Multimedia Core Standards Research Lab in Dallas from 2012 to 2015; Sr. Staff Researcher with FutureWei (Huawei)'s Media Lab in Bridgewater, NJ, from 2010 to 2012; Assistant Professor with the Dept of Computing, The Hong Kong Polytechnic University, from 2008 to 2010; and a Principal Staff Research Engineer with the Multimedia Research Lab (MRL), Motorola Labs, Schaumburg, Illinois, from 2000 to 2008. His research interests include image/video analysis, compression, and communication, and the associated optimization and machine learning tools. He has 30+ issued or pending patents and 100+ publications in book chapters, journals, conference proceedings, and standards contributions in these areas. He is an IEEE senior member; an elected member of the IEEE Multimedia Signal Processing (MMSP) Technical Committee, 2014-17 and 2017-20; the elected Steering Chair (2016-18) of the IEEE Multimedia Communication Technical Committee (MMTC); and an elected member of the IEEE Circuits & Systems Society Multimedia Systems & Applications (MSA) Technical Committee. He is an Associate Editor for IEEE Trans. on Multimedia (2015~), IEEE Trans. on Circuits & Systems for Video Technology (2016~), and the Springer Journal on Signal Processing Systems (2015~), and co-editor of the Springer-Verlag book "Intelligent Video Communication: Techniques and Applications".
He is general co-chair for IEEE VCIP 2017 and Special Session co-chair for IEEE ICME 2017. He has served on numerous conference and workshop TPCs, was an area chair for IEEE ICIP 2015, 2016, and 2017 and ICME 2015 and 2016, and was a symposium co-chair at IEEE ICC 2010 and IEEE Globecom 2017. He served on the Best Paper Award Committee for IEEE ICME 2010. He received a Best Paper Award from the IEEE Int'l Conf. on Multimedia & Expo (ICME) in Toronto, 2006, and a Best Paper Award from the IEEE Int'l Conf. on Image Processing (ICIP) in San Antonio, 2007. He is an APSIPA Distinguished Lecturer for 2017. Web: http://l.web.umkc.edu/lizhu Email: lizhu@umkc.edu.

Leave no Trace: Location Data with Provable Privacy Guarantees
Wednesday, November 08, 2017
Xi He
Duke University

Abstract: Companies such as Google or Lyft collect a substantial amount of location data about their users in order to provide useful services. Releasing these datasets for general use could enable numerous innovative applications and research. However, such data contains sensitive information about users, and simple cloaking-based techniques have been shown to be ineffective at ensuring users' privacy. These privacy concerns have motivated many leading technology companies and researchers to develop algorithms that collect and analyze location data with formal, provable privacy guarantees. Despite these efforts, there is no unified framework that can (a) provide a better understanding of the many existing provable privacy guarantees for location data; (b) allow flexible trade-offs between privacy, accuracy, and performance based on the application's requirements; and (c) handle advanced settings involving complex queries or datasets. In this talk, I will present our ongoing work addressing these challenges and discuss research directions for provable privacy guarantees.
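A common building block behind such formal guarantees is differential privacy. As a purely illustrative sketch (not the specific mechanisms discussed in the talk), the Laplace mechanism below perturbs a location count query so that the parameter epsilon explicitly trades accuracy for privacy:

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw Laplace(0, scale) noise via inverse transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon, rng):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (changing one user's record moves
    the count by at most 1), so Laplace noise of scale 1/epsilon suffices.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
visits = ["home", "work", "cafe", "work", "work"]
noisy = private_count(visits, lambda loc: loc == "work", epsilon=1.0, rng=rng)
print(round(noisy, 2))  # close to the true count of 3, but perturbed
```

Smaller epsilon means larger noise (more privacy, less accuracy), which is exactly the kind of privacy/accuracy trade-off the abstract refers to.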

Biography: Xi He is a Ph.D. student in the Computer Science Department at Duke University. Her research interests lie in privacy-preserving data analysis and security. She received an M.S. from Duke University and a double degree in Applied Mathematics and Computer Science from the National University of Singapore. Xi has been working with Prof. Machanavajjhala on privacy since 2012. She has published in SIGMOD, VLDB, and CCS, and has given tutorials on privacy at VLDB 2016 and SIGMOD 2017. She received the best demo award on differential privacy at VLDB 2016 and was awarded a 2017 Google Ph.D. Fellowship in Privacy and Security.

From Resource Disaggregation to Cooperative Memory Expansion in Networked Computing Systems
Wednesday, October 18, 2017
Dr. Nian-Feng Tzeng
University of Louisiana at Lafayette

Abstract: Networked computing systems usually consist of commodity servers, each configured with a fixed amount of CPU cores, DRAM, and storage. When executing diverse applications with varying resource requirements, such a system often cannot dynamically meet the resource needs of a given application during execution, leaving the hardware resources hosted in each server either oversubscribed or overprovisioned. Resource disaggregation has been proposed to address such imbalanced resource requirements, promising flexible, on-demand scaling of individual resource types independently. Typical resource disaggregation establishes separate pools of computing resources, but it calls for extensive resource-type-specific hardware and software knowledge. We have pursued resource disaggregation in an efficient and lightweight fashion, focusing on cooperative memory expansion (COMEX) in networked computing systems. COMEX establishes an immense physical memory pool collectively across networked nodes on demand, drastically accelerating application runs. Its functionality is seamlessly integrated with the Linux page-frame reclaiming function to exploit kernel information for superior page-in/page-out support while avoiding excessive overhead. A testbed of 12 servers networked by RDMA-enabled switch gear and equipped with COMEX support has been deployed for evaluation. Evaluation results under ten benchmarks from two suites reveal that COMEX can achieve speedups exceeding 97× over its native OS counterpart.
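The core idea of cooperative memory expansion can be sketched very simply (hypothetical data structures, not COMEX's kernel implementation): when local frames run out, the least-recently-used page is evicted to a peer node's DRAM instead of to disk, and paged back in over the fast interconnect on the next access:

```python
from collections import OrderedDict

class CooperativeMemory:
    """Toy model of cooperative memory expansion: on reclaim, the LRU
    page is parked in a peer node's memory pool rather than swapped to
    disk, mimicking COMEX-style page-in/page-out over the network."""

    def __init__(self, local_frames, peer_pool):
        self.capacity = local_frames
        self.local = OrderedDict()   # page id -> contents, in LRU order
        self.peer = peer_pool        # stands in for a remote node's DRAM

    def access(self, page, loader):
        if page in self.local:                # local hit
            self.local.move_to_end(page)
            return self.local[page]
        if page in self.peer:                 # remote hit: page in from peer
            data = self.peer.pop(page)
        else:                                 # cold miss: load fresh contents
            data = loader(page)
        if len(self.local) >= self.capacity:  # reclaim: evict LRU page to peer
            victim, contents = self.local.popitem(last=False)
            self.peer[victim] = contents
        self.local[page] = data
        return data

peer = {}
mem = CooperativeMemory(local_frames=2, peer_pool=peer)
for p in [1, 2, 3, 1]:                        # third access evicts page 1 to the peer
    mem.access(p, loader=lambda pid: f"page-{pid}")
print(sorted(peer))                           # pages currently parked on the peer node
```

The real system's advantage comes from RDMA making the remote path orders of magnitude faster than the disk path this eviction would otherwise take.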

Biography: Nian-Feng Tzeng is Lockheed Martin Professor with the Center for Advanced Computer Studies, School of Computing and Informatics, University of Louisiana at Lafayette, which he joined in 1987. He has served as Associate Director of the School of Computing and Informatics since 2016. His current research interests are in the areas of high-performance computer systems, computer communications and networks, and dependable computing and networked systems. He served on the editorial board of the IEEE Transactions on Parallel and Distributed Systems, 1998-2001, and on the editorial board of the IEEE Transactions on Computers, 1994-1998. He was the Chair of the Technical Committee on Distributed Processing, IEEE Computer Society, 1999-2001. An IEEE Fellow, Dr. Tzeng received the outstanding paper award at the 10th International Conference on Distributed Computing Systems, May 1990, and the University Foundation Distinguished Professor Award in 1997.

Can We Eliminate Synthesis From A Programmer's Development Path?
Monday, October 09, 2017
Dr. David Andrews
University of Arkansas

Abstract: Reconfigurable manycore chips are the semiconductor industry's next solution for providing more energy-efficient, scalable architectures for data centers and warehouse-scale computers. A recent report from the United States Bureau of Labor Statistics showed that only 83,000 computer hardware engineers are employed within the United States, compared with 1.3 million software programmers. These statistics show that the number of hardware designers is insufficient to handle the potential scale of FPGAs deployed throughout future data centers and warehouse-scale computers. It has therefore become imperative that we develop a realistic pathway for programmers, not just hardware engineers, to compile custom high-performance circuits onto the reconfigurable manycores that will populate our data centers and warehouse-scale computers. Current state-of-the-art approaches to creating circuits require the complete accelerator functionality to be first defined within a vendor's CAD tool and then synthesized. In this talk I will outline a new approach we are investigating that moves synthesis out of the programmer's development path. Our approach introduces a platform-independent interpreter language and a run-time system that can assemble hardware components just in time within a new overlay. Experimental results will be presented showing how the approach allows compilation of accelerators on both single-chip heterogeneous multiprocessor systems and a commercial reconfigurable cluster with 24 FPGAs.

Biography: David Andrews holds the Mullins Endowed Chair of Computer Engineering at the University of Arkansas. Dr. Andrews worked as a research scientist at General Electric's Electronics Laboratory and Advanced Technology Laboratories on parallel and distributed embedded real time systems. He has held faculty positions at the University of Arkansas and the University of Kansas. He has led research sponsored by DARPA, NSF and industry on parallel real time architectures, run time systems and middleware for hybrid CPU/FPGA MPSoCs. He received his PhD in Computer Science in 1992 from Syracuse University. He is a senior member of the IEEE.

Enabling Research in Business and Finance Analytics: An Australian Case Study
Thursday, June 29, 2017
Fethi Rabhi
University of New South Wales, Australia

Abstract: This seminar will be centered around financial data and its usefulness in research, teaching, and industry applications. It particularly focuses on datasets managed by Sirca, such as daily, intra-day, real-time, and news data. The seminar will then turn its attention to different software-related projects designed to process financial market data. It will demonstrate, through a number of case studies, how these tools can help finance researchers accelerate the analysis of financial market data during their research studies. The seminar will also illustrate the value of both the data and the tools in a practical teaching environment.

Biography: Fethi Rabhi completed a PhD in Computer Science at the University of Sheffield in 1990 and, after holding several academic appointments, is now a Professor in the School of Computer Science and Engineering at the University of New South Wales in Australia. His research interests are in Software Engineering, Design Methods, Service-Oriented Computing, High Performance Computing, Web Technologies, and Applications in Business, Finance and Economics (E-Business, Financial Trading, Electronic Markets and Banking). He has over 200 refereed publications (including 3 books) and has led several research projects in the UK and Australia with funding from both industry and research councils. He has also been involved in the development of several commercial software products through collaborative grants. Over the last 15 years, he has held important positions in several major research initiatives, including Program Manager in the Capital Markets Cooperative Research Centre (CMCRC), Research Leader on a large DEST Innovation Science Linkage grant, and most recently Research Leader on the New Financial Services project, part of the Smart Services Cooperative Research Centre. On the teaching side, he has contributed to bridging the knowledge gap between Computer Science and Finance.

Energy Saving in Data Centers
Monday, June 12, 2017
Laxmi Bhuyan
University of California, Riverside

Abstract: U.S. data center electricity consumption grew by roughly 36% between 2005 and 2010, to about 77,728,000 MWh/year, or 12 billion U.S. dollars. Based on current estimates, data centers are projected to consume approximately 73 billion kWh in 2020. The last decade has also brought explosive growth in delay-sensitive interactive services that have become an integral part of our lives and constitute an increasingly large portion of data center workloads. This talk presents our current research efforts to reduce energy consumption in data center servers for latency-sensitive applications while satisfying tail latency constraints. We apply both DVFS and CPU sleep states intelligently to save energy. We also introduce an approximation technique that can be applied to interactive applications, like web search, to further reduce power consumption while maintaining satisfactory quality. Finally, our current research on power saving in the data center network (DCN) through aggregation techniques is also presented.
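As a simplified illustration of the DVFS side of this idea (with made-up power and latency numbers, not the actual policy from the talk), a governor can pick the lowest-energy frequency whose service time still fits within the request's remaining latency slack:

```python
# Hypothetical table of (frequency GHz, power watts, service time ms)
# for one request; a real policy would use online profiling instead.
FREQ_TABLE = [
    (1.2, 20.0, 9.0),
    (1.8, 35.0, 6.0),
    (2.6, 60.0, 4.0),
]

def pick_frequency(slack_ms):
    """Choose the lowest-energy setting that meets the latency budget.

    Energy per request is power * service_time; among frequencies whose
    service time fits within the remaining slack, pick the cheapest,
    falling back to the fastest frequency when nothing fits.
    """
    feasible = [(p * t, f) for f, p, t in FREQ_TABLE if t <= slack_ms]
    if feasible:
        return min(feasible)[1]
    return max(FREQ_TABLE)[0]  # deadline already tight: run flat out

print(pick_frequency(slack_ms=10.0))  # plenty of slack -> lowest frequency
print(pick_frequency(slack_ms=5.0))   # tight budget -> must boost
```

This captures the key tension the abstract describes: running slower saves power per unit time but lengthens service time, so the choice must respect the tail-latency constraint rather than minimize power alone.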

Biography: Laxmi N. Bhuyan is a distinguished professor in Computer Science and Engineering at the University of California, Riverside. He has published more than 250 research papers in the areas of multiprocessor architectures, high-performance computing, network packet processing, and performance evaluation. He is a Fellow of the IEEE, ACM, and AAAS. A brief description of his research, Ph.D. graduates, and professional activities is available at http://www.cs.ucr.edu/bhuyan.

Microgrids: Formed for Reliability and Resilience, Operated for Economic Efficiency
Wednesday, March 08, 2017
Dr. Mohammad Khodayar
Southern Methodist University

Abstract: The reliability of energy supply in distribution networks depends on the availability of the distribution feeders. A fault or failure in a distribution line or feeder within the radial network leads to demand curtailment, and self-healing approaches, including fault isolation, network reconfiguration by remote-controlled tie-switches, and load restoration procedures, are employed to minimize the energy curtailment in distribution networks. Among these, microgrids are the most viable technology for improving the reliability and resilience of energy supply in distribution networks. Microgrids are composed of distributed energy resources (DERs) and demands with distinct boundaries, connected to the utility grid through the point of common coupling (PCC) or operated in island mode. Forming microgrids can help improve the restoration capability of distribution networks. As microgrids are deployed in low- or medium-voltage distribution networks, they exchange energy with the main grid through an aggregator, a middle agent that interacts with both the microgrid and the wholesale market. Fitting microgrids as prosumers into the economic operation of the bulk power system is crucial to capturing the economic benefits of this technology. This presentation addresses multiple challenges in microgrid operation and planning, including forming microgrids to provide heterogeneous reliability of service in distribution networks, reinforcing microgrid components to improve the resilience of energy supply when exposed to deliberate disruptions, and competition among microgrids to provide energy services and improve the economics of distribution and bulk-power networks.

Biography: Mohammad Khodayar received the B.Sc. degree from Amirkabir University of Technology, Tehran, Iran; the M.S. degree from Sharif University of Technology, Tehran; and the Ph.D. degree from the Illinois Institute of Technology, Chicago, IL, USA, in 2012, all in electrical engineering. He was a Senior Research Associate with the Robert W. Galvin Center for Electricity Innovation, Illinois Institute of Technology. He is currently an Assistant Professor in the Department of Electrical Engineering, Southern Methodist University, Dallas, TX, USA and an associate editor of the IEEE Transactions on Sustainable Energy. He is the author of over 40 peer-reviewed journal and conference publications. He is the guest editor for the special section on "Optimization techniques in renewable energy system planning, design, operation, and control", IEEE Transactions on Sustainable Energy. His research area is power system operation and planning, microgrids, and large-scale stochastic optimization.

Microsoft Azure Service Fabric
Wednesday, February 22, 2017
Dr. Rishi Sinha
University of Texas at Arlington

Abstract: Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices. Service Fabric also addresses the significant challenges in developing and managing cloud applications. Developers and administrators can avoid complex infrastructure problems and focus on implementing mission-critical, demanding workloads that are scalable, reliable, and manageable. Service Fabric represents the next-generation middleware platform for building and managing these enterprise-class, tier-1, cloud-scale applications.

Biography: Rishi R. Sinha is a Principal Engineering Manager in the Azure division at Microsoft Corporation. Rishi completed his PhD in Computer Science at the University of Illinois at Urbana-Champaign in 2007, where he worked in the Database and Information Systems Lab with Prof. Marianne Winslett. He completed his MS at Illinois in 2004 and earned a BS from Stony Brook University in 2002. At Microsoft, Rishi manages a group of exceptional software engineers on the Microsoft Azure Service Fabric team. The technology delivered by the team provides cutting-edge capabilities to develop, deploy, and manage large-scale stateful microservices. Azure Service Fabric forms the backbone of core Azure services, as well as the Microsoft Azure Stack shipped as part of Windows Server, providing a public/private cloud symmetry that very few other products offer. His team works on various core components of Azure Service Fabric and is also responsible for keeping internal developer productivity high.

Roles of Spatio-temporal Data Mining in Geo-informatics Applications
Tuesday, December 06, 2016
Kulsawasd Jitkajornwanich, PhD
King Mongkut's Institute of Technology Ladkrabang

Abstract: In this talk, two different research projects on roles of spatio-temporal data mining in geo-informatics applications in Thailand are discussed: a development of HF radar predictive system and a landscape metric-based road extraction from satellite imagery.

HF (high-frequency) radar systems are used to capture surface current behavior--in terms of velocity and direction--in the ocean near the coast. Eighteen HF coastal radar stations were deployed along the Gulf of Thailand to monitor for disaster situations (e.g., tsunamis) as well as related risks. Several other applications can also benefit from this near-real-time coastal radar data, such as oil-spill backtracking, water quality management, and marine navigation. In these applications, however, the functionality is limited to recent (non-forecast) datasets; applications such as search-and-rescue systems or hazardous-material spill trajectory prediction, which require forecast data, were not feasible. Therefore, in this work, we propose a model that predicts future surface currents from historical coastal radar datasets by utilizing spatial/temporal data mining.

Road extraction is a common task in GIS and is often used as a basis for many location-based applications in various domains. Despite the several methods proposed, each accompanied by different performance criteria, the focus has mainly been on extracting main roads with respect to some benchmark datasets (accuracy). Minor (or local) roads have received less attention, even though they are as important as main roads, especially in developing countries where local roads are often used as shortcuts or as part of a designated route. In this work, we aim to improve the completeness of the result set (main and local roads) by adopting an ecology concept called the landscape metric.

Biography: Kulsawasd Jitkajornwanich received his B.Sc. (Hons.) degree in Computer Science from Chulalongkorn University in 2004. Supported by the Royal Thai Government Scholarships, he began his graduate studies in 2007 and received his M.S. and Ph.D. degrees in Computer Science from the University of Texas at Arlington in 2009 and 2014, respectively. During his doctoral studies, he worked on a collaborative project between UT Arlington and NOAA-WGRFC/TRWD to develop a flood forecasting system for North Texas. From 2014 to 2016, he worked as a researcher at GISTDA, a government agency in the Ministry of Science and Technology of Thailand. He currently works as a Lecturer in the Department of Computer Science, King Mongkut's Institute of Technology Ladkrabang. His research areas are big spatial data analytics, distributed computing/storage frameworks, and spatio-temporal databases and mining.

Ranking verification counterexamples: An invariant guided approach
Tuesday, July 05, 2016
Ansuman Banerjee, PhD
Advanced Computing and Microelectronics Unit
Indian Statistical Institute Kolkata

Abstract: Unit testing and verification constitute an important step in the validation life cycle of large and complex multi-component designs. Many unit validation methods suffer from the problem of false negatives when they analyze a component in isolation and look for errors. It often turns out that some of the reported unit failures are infeasible, i.e., the valuations of the component input parameters that trigger the failure scenarios, though feasible on the unit in isolation, cannot occur in practice in the integrated design in which the unit under test is instantiated. In this work, we consider this problem in the context of a multi-component RTL design with a set of unit failures reported on a specific unit. We present an automated two-stage failure scenario classification and prioritization strategy that can filter out false negatives and cluster them accordingly. The use of classical AI and program analysis techniques in conjunction with formal verification helps in developing new frameworks for reasoning and deduction, which appear promising for a wide variety of problems. In particular, we discuss the results of applying this technique to a few RTL benchmarks.

Biography: Ansuman Banerjee is currently an Associate Professor at the Advanced Computing and Microelectronics Unit, Indian Statistical Institute Kolkata. His research interests include design automation for embedded systems, hardware/software verification, VLSI CAD, and automata theory. Ansuman received his Ph.D. from IIT Kharagpur. Prior to joining ISI, he served as a postdoctoral researcher in the Computer Science department at the National University of Singapore and worked at Interra Systems India Pvt. Ltd. as part of the synthesis and verification team.

Cumulon: Simplifying Matrix-Based Data Analytics in the Cloud
Wednesday, April 20, 2016
Jun Yang, PhD
Duke University

Abstract: Cumulon is a system aimed at simplifying the development and deployment of statistical analysis of big data in public clouds. Cumulon allows users to program in their familiar language of matrices and linear algebra, without worrying about how to map data and computation to specific hardware and cloud software platforms. Given requirements in terms of time, monetary cost, and risk tolerance, Cumulon automatically makes intelligent provisioning, configuration, and execution decisions---from the type and number of machines to acquire, to the choice of blocking factors for matrix multiply. For clouds with auction-based markets, where the cost and availability of computing resources vary according to market conditions, Cumulon helps users decide how to bid for such resources and how to cope with market volatility. In this talk, I will share our experience in building Cumulon, including the alternatives explored and the lessons learned.
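To give a flavor of one such decision, the blocking factor for matrix multiply determines how much of each operand is resident per task. A minimal blocked multiply (illustrative only, not Cumulon's actual executor) looks like:

```python
import numpy as np

def blocked_matmul(A, B, bs):
    """Multiply A (m x k) by B (k x n) in bs x bs tiles.

    Larger tiles mean fewer, bigger tasks (less scheduling overhead but
    more memory per worker); a system like Cumulon searches over this
    trade-off when planning a job on cloud hardware.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n))
    for i in range(0, m, bs):
        for j in range(0, n, bs):
            for p in range(0, k, bs):
                # Each tile product is an independent unit of work that
                # could be shipped to a separate cloud worker.
                C[i:i+bs, j:j+bs] += A[i:i+bs, p:p+bs] @ B[p:p+bs, j:j+bs]
    return C

rng = np.random.default_rng(0)
A, B = rng.random((6, 4)), rng.random((4, 6))
print(np.allclose(blocked_matmul(A, B, bs=2), A @ B))  # True
```

The result is identical for any blocking factor; only the shape and number of intermediate tasks change, which is why the choice can be left to an automatic optimizer.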

Biography: Jun Yang is a Professor of Computer Science at Duke University, where he has been teaching since receiving his Ph.D. from Stanford University in 2001. He is broadly interested in databases and data-intensive systems. He is a recipient of the NSF CAREER Award, IBM Faculty Award, HP Labs Innovation Research Award, and Google Faculty Research Award. He also received the David and Janet Vaughan Brooks Teaching Award at Duke. His current research interests lie in making data analysis easier and more scalable for scientists, statisticians, and journalists.

Data-Driven 3D Modeling
Thursday, April 07, 2016
Enrique Dunn, PhD
University of North Carolina Chapel Hill

Abstract: The pervasive generation and public dissemination of visual media offer an ever-increasing record of observed environments and events. The sheer amount of imagery and the wide-ranging diversity of recorded content render the analysis of this data a challenging technical task. The visual inference of geometric and semantic concepts from captured imagery is still an open research problem being vigorously pursued in both academia and industry. Large-scale visual 3D modeling provides a framework to both integrate such heterogeneous imagery into a common reference frame and synthesize rich environmental representations. In many regards, image-based geometry estimation has achieved a level of maturity that renders it a "deployable technology" in consumer electronics (e.g., Microsoft's Kinect sensor) and enterprise-level data services (e.g., Google/Apple/Bing Maps). However, the continuous influx of video and image data (available in public archival and crowd-sourced repositories, social media, and live feeds) opens a diverse set of new challenges and opportunities for the deployment of 3D modeling systems.

The first challenge (and perhaps the most evident) is achieving computational scalability in the presence of Internet-scale unstructured imagery datasets. The second is exploiting the heterogeneous nature of the available data to enhance (rather than hinder) the attained environmental representations. The third is modeling the observed environmental dynamics from uncontrolled imagery.

In this talk, I will discuss recent research efforts aimed at augmenting the computational and conceptual scope of image-based 3D modeling in the context of crowd-sourced visual data. More specifically, I will present solutions to the aforementioned challenges by addressing the problems of large-scale data association, enhanced model fidelity through multi-source visual media integration, and spatiotemporal structure modeling from ad-hoc and unsynchronized video capture. The relevance of these technologies to fields such as remote sensing, robotics, virtual reality and human computer interaction will also be discussed.

Biography: Enrique Dunn is a research assistant professor in the Department of Computer Science at the University of North Carolina at Chapel Hill. He is part of the 3D Computer Vision Group, carrying out research on the geometric and semantic relationships between a 3D scene and a depicting image set. Dr. Dunn earned a master's degree in Computer Science in 2001 and a doctorate in Electronics and Telecommunications in 2006, both from the Ensenada Center for Scientific Research and Higher Education (Mexico). During his doctoral studies, Dr. Dunn carried out research while visiting the French Institute for Research in Computer Science and Control (INRIA) in Rocquencourt. He joined the Department of Computer Science as a visiting scholar in 2008 after being awarded a one-year Postdoctoral Fellowship for Studies Abroad by Mexico's National Council for Science and Technology. He remained with UNC's CS Department as a postdoctoral researcher until becoming a research assistant professor in 2012. Dr. Dunn has authored over 40 papers in international conferences and journals. He is a member of the Editorial Board of the Elsevier Journal of Image and Vision Computing. His current research interests include large-scale crowd-sourced image analysis, structure from motion, dense 3D modeling, active vision systems, and evolutionary computation.

Pipelined Symbolic Taint Analysis on Multi-core Architectures
Thursday, March 31, 2016
Jiang Ming
Pennsylvania State University

Abstract: The multifaceted benefits of taint analysis have led to its wide adoption in security tasks such as software attack detection, data lifetime analysis, and reverse engineering. However, the high runtime overhead imposed by dynamic taint analysis has severely limited its adoption in production systems. The slowdown incurred by conventional dynamic taint analysis tools can easily exceed 30X. One way to improve performance is to parallelize taint analysis. Existing work has dramatically sped up the analysis but has encountered a bottleneck: a key obstacle to effective parallelization is the strict coupling of program execution and taint tracking logic code. In this talk, I will present TaintPipe, a novel technique for parallelizing taint analysis in a pipeline style to take advantage of ubiquitous multi-core platforms. With the developed techniques, TaintPipe significantly improves the performance of taint analysis and advances the state of the art, enabling broader adoption of information tracking technology. In addition, I will briefly introduce my research on formal program-semantics-based methods for obfuscated binary code analysis and outline future work.
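To make the coupling concrete, dynamic taint analysis interleaves every program operation with shadow bookkeeping like the toy tracker below (a conceptual sketch, not TaintPipe's implementation); decoupling this shadow logic from the program's own execution is precisely what a pipelined design distributes across cores:

```python
class TaintTracker:
    """Toy taint tracker: each variable carries a set of taint labels,
    and the result of any operation takes the union of its operands'
    labels -- the propagation rule that inlined instrumentation must
    run after every instruction in conventional tools."""

    def __init__(self):
        self.shadow = {}  # variable name -> set of taint labels

    def taint_source(self, var, label):
        self.shadow[var] = {label}      # e.g., bytes read from the network

    def propagate(self, dst, *srcs):
        # dst = f(srcs): dst is tainted iff any source operand is.
        self.shadow[dst] = set().union(*(self.shadow.get(s, set()) for s in srcs))

    def check_sink(self, var):
        # e.g., consulted before using var as a jump target or SQL string
        return self.shadow.get(var, set())

t = TaintTracker()
t.taint_source("user_input", "net")
t.propagate("length", "user_input")          # length = len(user_input)
t.propagate("total", "length", "constant")   # total = length + constant
print(t.check_sink("total"))                 # {'net'}: attacker-influenced
```

Running this bookkeeping inline after every instruction is the source of the large slowdowns the abstract cites, which is why offloading it to other cores pays off.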

Biography: Jiang Ming is currently a Ph.D. candidate in the College of Information Sciences and Technology of Pennsylvania State University, where he is a member of the Software Systems Security Research Lab. His research focuses on security, especially software security and malware defense, including secure information flow analysis, software plagiarism detection, malicious binary code analysis, and software analysis for security issues. Jiang Ming has extensive academic and industry experience in computer security. His work has been published in prestigious security and software engineering conferences (USENIX Security, CCS, Euro S&P, and FSE). He is among the first to work on symbolic execution based methods for semantics-based binary code diffing. More recently he has been working on the design of efficient and obfuscation-resilient binary code analysis techniques.

Self-Collusion Resistant Auctions for Heterogeneous Secondary Spectrum Markets
Monday, March 28, 2016
Wei Li
George Washington University

Abstract: Spectrum auctions have been proposed in recent years as powerful market-based spectrum management techniques to improve channel utilization while benefiting both the primary and the secondary users in secondary spectrum markets. The major design goal of these auction schemes is truthfulness, which prevents market manipulation by ensuring that no buyer or seller can obtain a larger utility by cheating on its bid price. However, self-collusion, a more insidious cheating behavior, can successfully break the truthfulness of the three most popular schemes adopted by secondary spectrum auctions, namely McAfee, Myerson's Optimal Mechanism (MOM), and Vickrey-Clarke-Groves (VCG). The existence of self-collusion in MOM and VCG has never before been reported in the literature. In this talk, we will present our research results on countering the self-collusion problem in MOM- and VCG-based spectrum auctions. In particular, we will present the root causes of the self-collusion phenomenon and introduce our novel self-collusion-resistant auction schemes, which simultaneously achieve important economic properties such as truthfulness and individual rationality. To the best of our knowledge, we are the first to investigate self-collusion in MOM and VCG for secondary spectrum markets.
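For intuition on the truthfulness these schemes target, consider the single-item case of VCG, which reduces to a second-price auction: the winner pays the harm its presence imposes on others, so misreporting its value never raises its utility. This is a textbook illustration, separate from the spectrum-specific constructions (and the self-collusion attacks) discussed in the talk:

```python
def vcg_single_item(bids):
    """Second-price (single-item VCG) auction.

    bids: dict mapping bidder -> bid. The winner is the highest bidder;
    the payment is the highest competing bid -- the welfare the others
    lose because the winner is present. Utility = value - payment is
    then independent of the winner's own bid, which is why truthful
    bidding is a dominant strategy for a single bidder acting alone.
    """
    winner = max(bids, key=bids.get)
    payment = max(b for name, b in bids.items() if name != winner)
    return winner, payment

# Truthful bid: buyer A values the channel at 10.
winner, pay = vcg_single_item({"A": 10, "B": 7, "C": 5})
print(winner, pay)            # A wins, pays 7; utility 10 - 7 = 3

# Overbidding cannot help: A bids 20, still pays 7, utility unchanged.
winner, pay = vcg_single_item({"A": 20, "B": 7, "C": 5})
print(winner, pay)
```

The self-collusion attacks the abstract describes evade this guarantee by having one participant submit multiple coordinated bids, a case the single-bidder truthfulness argument above does not cover.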

Biography: Wei (Lisa) Li is a Ph.D. candidate in the Department of Computer Science at The George Washington University (GWU), with her degree expected in May 2016. Her current research spans secure and privacy-aware computing, and secure and truthful auctions in dynamic spectrum access. She has authored or coauthored more than 20 papers, most of them published or accepted for publication in premier journals and conferences such as IEEE/ACM Transactions on Networking, IEEE JSAC, IEEE Transactions on Wireless Communications, ACM MobiHoc, and IEEE INFOCOM. She has won two best student paper awards, one from the ACM workshop CRAB 2013 and one from WASA 2010. She also received the Louis P. Wegman Endowment Fellowship in 2014 and the School of Engineering and Applied Science 125th Anniversary Endowment Fellowship in 2013 from GWU. She is a student member of the IEEE and a member of the IEEE Communications Society.

Cross-Domain Cyber-Physical Systems for Smart Cities: Addressing Mobility Challenges by Urban Systems with Urban Data
Monday, March 21, 2016
Desheng Zhang
University of Minnesota

Abstract: For the first time ever, more people live in urban areas than in rural areas. Given this inevitable urbanization, my research aims to address sustainability challenges related to urban mobility (e.g., energy consumption and traffic congestion) through data-driven applications with a Cyber-Physical Systems (CPS) approach, also viewed as a broader term for the Internet of Things: a new information paradigm integrating communication, computation, and control in real time. In the context of the smart cities initiative proposed by the White House, this talk will focus on CPS related to large-scale cross-domain urban systems, e.g., taxi, bus, subway, cellphone, and smart payment systems. I will first show how cross-domain data from these systems can be collaboratively utilized to capture urban mobility in real time by a new technique called multi-view bounding, which addresses overfitting issues in existing mobility models driven by single-domain data. Then I will show how the captured real-time mobility can be used to design a practical service, i.e., mobility-driven ridesharing, that provides positive feedback to the urban systems themselves, e.g., reducing energy consumption and traffic congestion. Finally, I will present the real-world impact of my research and some future work on CPS for smart cities.

Biography: Desheng Zhang is a Research Associate in the Department of Computer Science and Engineering at the University of Minnesota. Previously, he was offered the Senseable City Consortium Postdoctoral Fellowship from MIT and received his Ph.D. in Computer Science from the University of Minnesota. His research is uniquely built upon 10 TB of urban data from 10 kinds of cross-domain urban systems, including cellphone, smartcard, taxi, bus, truck, subway, bike, personal vehicle, electric vehicle, and road networks, in 8 cities across 3 continents, involving 100 million urban residents. Desheng designs and implements large-scale data-driven models and real-world services to address urban sustainability challenges. He has published more than 20 papers, featuring 11 first-author papers in premium Computer Science venues, e.g., MobiCom, SenSys, IPSN, ICCPS, SIGSPATIAL, ICDCS, RTSS, and BIGDATA, and has received 6 best paper/thesis/poster awards. More info: http://www.cs.umn.edu/~zhang/

Exploring next-generation technologies for near-threshold computing and high-performance computing systems
Thursday, March 10, 2016
Xue Lin
University of Southern California

Abstract: Low-power embedded computing and high-performance computing are pervasive and important at various scales of application, ranging from battery-powered embedded systems, handheld smartphones, desktop computers, and household appliances to data centers and grid-level applications. In my talk, I will discuss my work on near-threshold computing for low-power embedded systems with next-generation technologies. We investigate the characteristics of FinFET devices and circuits, and optimize the structure of FinFET circuits and systems under near-threshold computing. We propose a device-circuit-architecture cross-layer design framework, spanning accurate FinFET device modeling, logic and memory cell optimization, and performance and energy-efficiency enhancement techniques.

In high-performance data centers, over-provisioning of energy storage devices (ESDs) provides new opportunities for performing power capping and capex/opex reduction without performance degradation. We propose the hierarchical ESD structure for data centers and the corresponding provisioning and control framework for design-time optimization and run-time control. I also work on future data center structure and propose the data-center-on-a-chip (DCoC) paradigm. We solve the virtual machine mapping problem in the DCoC paradigm to minimize the communication cost while satisfying chip power budget and power density constraints.

Biography: Xue Lin is a Ph.D. candidate in the Department of Electrical Engineering at the University of Southern California. Her advisor is Prof. Massoud Pedram. Her research interests are (i) near-threshold computing for low-power embedded systems, (ii) high-performance computing and mobile cloud computing systems, and (iii) machine learning and computing in (embedded) cyber-physical systems. Her research work has resulted in two Best Paper Awards, multiple Best Paper nominations, and one IEEE Trans. on CAD Popular Paper. Most of her conference papers are published in highly selective conference proceedings with 20% - 30% acceptance rate.

Tools And Techniques For Multiparty Computation
Monday, February 29, 2016
Samee Zahur
University of Virginia

Abstract: How can two strangers figure out how many phone contacts they have in common without revealing anything else about each other? Can this be done even in the absence of trusted third parties? How about the case of two strangers comparing genetic information to figure out how closely related they are? Many interesting applications have become possible with recent improvements in multiparty computation, and we are working to make it even more efficient and convenient to use.

In this talk, we will focus on random memory access in secure computation. In other words, we will try to efficiently solve the problem where a program needs to access a memory location without revealing which location is being accessed. The first part covers specialized circuit structures that allow extremely efficient memory access for any circuit-based protocol (e.g., Yao, GMW), but only if the access pattern follows certain constraints. The second half of the talk presents a new Oblivious RAM construction that allows arbitrary random access, but is less efficient. Although this problem had been "solved" in theory, past solutions only provide asymptotic benefits: they all had exorbitant initialization costs that dwarfed any per-access performance improvement they provided. Our construction provides a 100x improvement in initialization cost, and concrete benefits for data sizes as small as 144 bytes, despite being asymptotically inferior.
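The baseline that makes Oblivious RAM worthwhile is the trivial linear scan: touching every memory cell on each access hides which location was used, at O(n) cost per access. A minimal sketch of that baseline (our own illustration, not the talk's construction):

```python
def oblivious_read(memory, secret_index):
    """Read memory[secret_index] while touching every cell, so the
    physical access pattern reveals nothing about secret_index.
    Cost is O(n) per access -- the baseline ORAM constructions beat."""
    result = 0
    for i, value in enumerate(memory):
        mask = -int(i == secret_index)   # all-ones when selected, else 0
        result |= value & mask           # accumulate only the chosen cell
    return result
```

An ORAM construction replaces this full scan with a shuffled, re-encrypted data structure so that each logical access touches only polylogarithmically many cells while remaining oblivious.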

We hope this will make secure multiparty computation easier to adopt in a greater variety of applications than was reasonable in the past.

Biography: Samee Zahur is a PhD student advised by David Evans at the University of Virginia. He works mostly on secure computation protocols, and on making them practical. Previously, he also interned at MSR under the supervision of Bryan Parno. During this time he worked on the Geppetto project, which allowed results of computation outsourced to a powerful server to be verified by a weak client. He has also spent a summer internship at SRI International, where he worked under the supervision of Mariana Raykova.

Emerging Nonvolatile Memory Technology based Future Main Memory System
Thursday, February 25, 2016
Lei Jiang, PhD
AMD

Abstract: Main memory scaling is in great peril, as cell size remains constant and power consumption rises at the latest technology generation for traditional memory technologies such as dynamic random access memory (DRAM). Recent innovations have identified emerging nonvolatile memories, such as phase change memory (PCM), as scalable solutions to boost memory capacity in a power-efficient manner. Multi-level cell (MLC) PCM, which stores multiple bits in a single cell, further increases storage density at a lower cost per bit. However, to deploy MLC PCM as a DRAM alternative and exploit its scalability, MLC PCM must be architected to overcome its own disadvantages, such as long write latency, limited cell endurance, and large write power. In this talk, I will first present my first technique, write truncation, which reduces the number of write iterations and write latency with the help of error correction codes. I will then describe my second technique, triple-level cell (TLC), which reduces write power and prolongs memory lifetime by storing 2 bits into double TLCs. Finally, to mitigate the large write power problem of MLC PCM, I will propose my third technique, RESET scheduling, which reduces PCM chip peak power without prolonging write latency. With these techniques, MLC PCM becomes a practical and competitive candidate for future main memory systems.
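To illustrate the write-truncation idea, the sketch below shows a generic program-and-verify write loop that stops early once the remaining incorrect cells fall within the correction capability of the ECC. The cell interface (`cell_read`, `cell_write`) is hypothetical, and the loop is only a schematic of the technique, not the speaker's design.

```python
def truncated_write(target, cell_write, cell_read, max_iters, ecc_t):
    """Program-and-verify MLC write loop with write truncation: stop as
    soon as the number of still-incorrect cells is within the correction
    capability ecc_t of the ECC, leaving those errors for the decoder.

    cell_read/cell_write model a (hypothetical) per-cell interface."""
    for it in range(max_iters):
        errors = [i for i, v in enumerate(target) if cell_read(i) != v]
        if len(errors) <= ecc_t:
            return it                    # truncate: ECC covers the rest
        for i in errors:
            cell_write(i, target[i])     # re-program only failing cells
    return max_iters
```

Since MLC PCM write latency is dominated by the slowest cells, letting the ECC absorb the last few stubborn cells cuts the number of iterations, and hence the write latency.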

Biography: Lei Jiang received his BS and MS from Shanghai Jiao Tong University, China, in 2006 and 2009, respectively. He completed his PhD at the University of Pittsburgh in January 2015, and now works at AMD. His research topics include phase change memory, STT-MRAM, and memristors. He is a co-recipient of the Best Paper Award at the International Symposium on Low Power Electronics and Design (ISLPED) in 2013.

Data Center Networks: Trends, Opportunities, and Challenges
Wednesday, February 17, 2016
Samee U. Khan, PhD
Department of Electrical and Computer Engineering
North Dakota State University

Abstract: The major Information and Communication Technology (ICT) components within a data center are: (a) servers, (b) storage, and (c) interconnection networks. The data center network (DCN) is considered the communication backbone of a data center; consequently, a DCN is one of the prime design concerns in the data center, which plays a pivotal role in ascertaining the performance factors and initial capital investment. However, the legacy DCN infrastructures inherently lack the capability to meet the current growth trend and bandwidth demands. The legacy DCN architectures mainly suffer from: (a) energy-inefficiency, (b) poor scalability, (c) high cost, (d) low cross-section bandwidth, and (e) non-agility.

In this talk, we will discuss a few legacy DCN architectures and elaborate on the current industrial-standard DCN architectures, such as the fat-tree, DCell, VL2, BCube, flattened butterfly, and FiConn. We will discuss current trends in DCN architectural research and elaborate on the challenges, such as (a) performance, (b) reliability, (c) fault tolerance, (d) high end-to-end bandwidth, and (e) agility, that point to several research opportunities.
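As a concrete example of how one of these architectures scales, a k-ary fat-tree built entirely from identical k-port switches supports k^3/4 hosts. The helper below (our own illustration, not from the talk) computes the standard host and switch counts:

```python
def fat_tree_stats(k):
    """Host and switch counts for a k-ary fat-tree built entirely
    from identical k-port switches (k must be even)."""
    assert k % 2 == 0
    return {
        "hosts": k ** 3 // 4,     # k pods * (k/2 edge sw) * (k/2 hosts each)
        "edge": k * (k // 2),     # k/2 edge switches in each of k pods
        "agg": k * (k // 2),      # k/2 aggregation switches per pod
        "core": (k // 2) ** 2,    # (k/2)^2 core switches
    }
```

For instance, k = 48 yields 27,648 hosts from commodity 48-port switches, which is why the fat-tree provides full cross-section bandwidth at much lower cost than the legacy tree of large, expensive core routers.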

Biography: Samee U. Khan received a BS degree in 1999 from Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Topi, Pakistan, and a PhD in 2007 from the University of Texas, Arlington, TX, USA. Currently, he is Associate Professor of Electrical and Computer Engineering at the North Dakota State University, Fargo, ND, USA. Prof. Khan's research interests include optimization, robustness, and security of: cloud, grid, cluster and big data computing, social networks, wired and wireless networks, power systems, smart grids, and optical networks. His work has appeared in over 300 publications. He is on the editorial boards of leading journals, such as IEEE Access, IEEE Cloud Computing, IEEE Communications Surveys and Tutorials, and IEEE IT Pro. He is a Fellow of the Institution of Engineering and Technology (IET, formerly IEE), and a Fellow of the British Computer Society (BCS). He is an ACM Distinguished Lecturer, a member of the ACM, and a Senior Member of the IEEE.

Violating Privacy and Providing Security by using Human Behavior
Thursday, November 19, 2015
Janne Lindqvist
Rutgers University

Abstract: In this talk, we will discuss two of our recent works on using knowledge of human behavior for systems security and privacy. First, we discuss Elastic Pathing, an algorithm that can deduce where you have driven based only on a starting location and the speed of your driving. This is an important result because several insurance companies claim that their approach to "usage-based automotive insurance" is privacy-preserving when they collect only speed data; our work shows that this is not the case. Second, we will discuss a robust approach to user authentication: user-generated free-form gestures. We show how people, without receiving any specific instructions, are able to generate gestures that are both memorable and secure.

Biography: Janne Lindqvist is an assistant professor of electrical and computer engineering and a member of WINLAB at Rutgers University. From 2011-2013, Janne was an assistant research professor at ECE/WINLAB at Rutgers. Prior to Rutgers, Janne was a post-doc with the Human-Computer Interaction Institute at Carnegie Mellon University's School of Computer Science. Janne received his M.Sc. degree in 2005, and D.Sc. degree in 2009, both in Computer Science and Engineering from Helsinki University of Technology, Finland. He works at the intersection of security engineering, human-computer interaction and mobile computing. Before joining academia, Janne co-founded a wireless networks company, Radionet, which was represented in 24 countries before being sold to Florida-based Airspan Networks in 2005. His work has been featured several times in IEEE Spectrum, MIT Technology Review, Scientific American, Communications of the ACM, Yahoo! News, NPR, WHYY Radio and recently also in CBS Radio News, Fortune, Computerworld, Der Spiegel, London Times, International Business Times, Slashdot, The Register, and over 300 other online venues and print media around the world. He has received the Best Paper Award from MobiCom'12 and the Best Paper Nominee Award from UbiComp'14.

Towards smarter non-intrusive systems for independent living
Wednesday, November 18, 2015
Claudio Bettini
University of Milan

Abstract: In an ageing world population, more citizens are at risk of losing their ability to live independently, with consequences for quality of life and the sustainability of related costs. Mobile and pervasive computing, coupled with intelligent data processing and analysis, can provide innovative methods and tools for supporting a better quality of life as well as early and focused intervention. In this talk, I will first briefly report on the work carried out at the EveryWare lab in recent years on innovative mobile applications for the elderly as well as for people with vision disabilities. I will then discuss in more detail the results of a project, in collaboration with a hospital and a tele-medicine company, on the use of pervasive systems installed in the elderly's homes to support clinicians in monitoring cognitive decline. I will illustrate SmartFABER, a novel hybrid statistical and knowledge-based technique used to analyse sensor data and detect behavioral anomalies.

Biography: Claudio Bettini is full professor at the computer science department of the University of Milan, where he leads the EveryWare laboratory (http://everywarelab.di.unimi.it/). He received his PhD in Computer Science from the University of Milan in 1993. He has been post-doc at IBM Research, NY, and, for more than a decade, an affiliate research professor at the Center for Secure Information Systems at George Mason University, VA. His research interests cover the areas of mobile and pervasive computing, data privacy, temporal and spatial data management, and knowledge management. On these topics he has extensively published in the leading conferences and journals. He has been serving as PC Chair and General chair in the organisation of major events in the Mobile and Pervasive Computing areas. He has been associate editor of The VLDB Journal, the Journal of Pervasive and Mobile Computing, IEEE TKDE, PeerJ Computer Science Journal. In 2011 he founded EveryWare Technologies, a startup developing innovative mobile apps for the disabled and the elderly. He has been co-PI in three NSF funded projects on data privacy as well as PI and co-PI of several Italian national projects. He is a member of ACM SIGMOD and IEEE Computer Society.

High Performance Data Analytics
Monday, November 16, 2015
Howie Huang
Department of Electrical and Computer Engineering
George Washington University

Abstract: Recently, President Obama ordered the establishment of the National Strategic Computing Initiative (NSCI), calling for research and development of next-generation high-performance computing (HPC) systems. Going beyond traditional HPC areas, such systems will focus on efficient processing of vast amounts of structured and unstructured data. In this talk, I will describe our ongoing efforts to develop strong computer systems support toward this goal, specifically two projects to be presented at the upcoming SC'15 (Supercomputing) conference.

First, the Breadth-First Search (BFS) algorithm serves as the foundation for many graph-processing applications and analytics workloads. We have designed and developed a new GPU-based BFS system that delivers unprecedented performance through efficient scheduling of a large number of GPU threads and effective utilization of the GPU memory hierarchy. This system won top places in the recent rankings of Graph 500 and Green Graph 500, delivering 122 billion TEPS (traversed edges per second) and 446 million TEPS per watt.
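For reference, the level-synchronous frontier structure that GPU BFS implementations parallelize looks like this in sequential form (a generic sketch, not the speakers' system):

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS over an adjacency-list graph. Each pass
    expands the entire frontier at once -- the step that a GPU
    implementation distributes across thousands of threads."""
    level = {source: 0}
    frontier = [source]
    while frontier:
        next_frontier = []
        for u in frontier:                 # parallel-for on a GPU
            for v in adj[u]:
                if v not in level:         # first visit claims the vertex
                    level[v] = level[u] + 1
                    next_frontier.append(v)
        frontier = next_frontier
    return level
```

The engineering challenge on a GPU is precisely in this loop: balancing work across threads when vertex degrees vary wildly, and keeping frontier and visited structures in the right levels of the memory hierarchy.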

Second, large-scale cloud data centers leverage virtualization to achieve high resource utilization, scalability, and availability. Although the performance of an application running inside a virtual machine (VM) should ideally be independent of co-located applications that share the physical resources, running big data applications presents a unique challenge to achieving optimal performance in such virtualized systems. We have built IOrchestra, a holistic collaborative virtualization framework that bridges the semantic gaps of I/O stacks and system information across multiple VMs, improves virtual I/O performance through collaboration from guest domains, and increases resource utilization in data centers.

Biography: Dr. Howie Huang is an Associate Professor and Associate Chair/Director of Academic Programs in the Department of Electrical and Computer Engineering at the George Washington University. His research interests are in the areas of computer systems and architecture, including cloud computing, big data, high-performance computing, and storage systems. He received the NSF CAREER Award in 2014, the GWU School of Engineering and Applied Science Outstanding Young Researcher Award in 2014, a Comcast Technology Research and Development Fund award in 2015, an NVIDIA Academic Partnership Award in 2011, and an IBM Real Time Innovation Faculty Award in 2008. His projects won the Best Poster Award at PACT'11 and the ACM Undergraduate Student Research Competition at SC'12, and were a finalist for the Best Student Paper Award at SC'11. He received his BS from the Department of Computer Science at Wuhan University in China and completed his Ph.D. in the Department of Computer Science at the University of Virginia. http://www.seas.gwu.edu/~howie/

Binary Code Analysis on OS Kernels: Techniques and Applications
Wednesday, September 09, 2015
Zhiqiang Lin
UT Dallas

Abstract: As a basic means of reverse engineering program logic, binary code analysis has been used in many security applications such as malware analysis, vulnerability discovery, protocol reverse engineering, and forensic analysis. However, most efforts in binary code analysis have focused on user-level software, with significantly less attention paid to kernel binaries.

In this talk, Dr. Lin will present a line of his recent work on using dynamic binary code analysis on OS kernels to solve a unique problem in virtualization, namely the semantic gap problem. This problem exists because the view at the hypervisor layer is too low level: there are no semantic abstractions such as files, APIs, and system calls. Therefore, hypervisor-layer programmers often have to bridge the semantic gap manually while developing virtual machine introspection software. Dr. Lin will show how dynamic binary code analysis can automatically bridge the semantic gap with a number of program analysis techniques at the hypervisor layer, and will demonstrate a set of new applications, such as using native commands for guest-OS introspection and automated guest-OS management.

Biography: Dr. Zhiqiang Lin is an assistant professor at the University of Texas at Dallas. He received his PhD from the department of computer science at Purdue University. Dr. Lin's primary research interests are systems and software security, with an emphasis of developing program analysis techniques and applying them to secure the OS kernels as well as the running software. Dr. Lin is a recipient of the NSF CAREER award, the AFOSR Young Investigator award, and a VMware faculty research award.

Large Scale Analytics for Medical Applications
Tuesday, September 01, 2015
Dimitris Metaxas, PhD
Rutgers University

Abstract: Over the last 20 years, we have been developing a general, scalable computational framework that combines principles of computational learning with sparse methods, mixed norms, dictionaries, CNNs, and deformable modeling methods. This framework has been used to solve complex, large-scale problems in medical image analysis. Our methods allow the discovery of complex features, shapes, and learning-based analytics. We will present these methods and their use in several medical applications, including feature discovery for segmentation and recognition of body parts, cardiac MRI image reconstruction and cardiac analytics including blood flow, large-scale histopathological image analysis and retrieval, body-part recognition from images, and body fat estimation.

Biography: Dr. Dimitris Metaxas is a Distinguished Professor and Chair of the Computer Science Department at Rutgers University. He is director of the Center for Computational Biomedicine, Imaging and Modeling (CBIM). From September 1992 to September 2001 he was a tenured faculty member in the Computer and Information Science Department of the University of Pennsylvania and Director of the VAST Lab. Prof. Metaxas received a Diploma in Electrical Engineering from the National Technical University of Athens Greece in 1986, an M.Sc. in Computer Science from the University of Maryland, College Park in 1988, and a Ph.D. in Computer Science from the University of Toronto in 1992. Dr. Metaxas has been conducting research towards the development of formal methods to advance medical imaging, computer vision, computer graphics, and understanding of multimodal aspects of human language. His research emphasizes the development of formal models in shape representation, deterministic and statistical object modeling and tracking, sparse learning methods for segmentation and restoration, and organ motion analysis. Dr. Metaxas has published over 400 research articles in these areas and has graduated 40 PhD students. The above research has been funded by NSF, NIH, ONR, AFOSR, DARPA, HSARPA and the ARO. Dr. Metaxas has received several best paper awards, and he has 7 patents. He was awarded a Fulbright Fellowship in 1986, is a recipient of an NSF Research Initiation and Career awards, an ONR YIP, and is a Fellow of the American Institute of Medical and Biological Engineers. He has been involved with the organization of several major conferences in vision and medical image analysis, including ICCV 2007, ICCV 2011, MICCAI 2008 and CVPR 2014.

Designing Human-in-the-Loop Systems for Surgical Training and Intervention
Friday, May 01, 2015
Ann Majewicz, PhD
University of Texas Dallas

Abstract: Human-controlled robotic systems can greatly improve healthcare by synthesizing information, sharing knowledge with the human operator, and assisting with the delivery of care. This talk will highlight projects related to new technology for surgical simulation and training, as well as a more in-depth discussion of a novel teleoperated robotic system that enables complex needle-based medical procedures that are currently not possible. The central element of this work is understanding how to integrate the human with the physical system in an intuitive and natural way, and how to leverage the relative strengths of the human and the mechatronic system to improve outcomes.

Biography: Ann Majewicz completed B.S. degrees in Mechanical Engineering and Electrical Engineering at the University of St. Thomas, the M.S.E. degree in Mechanical Engineering at Johns Hopkins University, and the Ph.D. degree in Mechanical Engineering at Stanford University. Dr. Majewicz joined the Department of Mechanical Engineering at UT Dallas as an Assistant Professor in August 2014, where she directs the Human-Enabled Robotic Technology Laboratory. She holds a courtesy appointment in the Department of Surgery at UT Southwestern Medical Center. Her research interests focus on the interface between humans and robotic systems, with an emphasis on improving the delivery of surgical and interventional care, both for the patient and the provider.

The Synaisthisi Project: Using Multi-Agent Systems Technologies in Resource Allocation
Friday, April 10, 2015
Ioannis A. Vetsikas, PhD

Abstract: The proliferation of mobile smart devices and the Internet of Things (IoT) vision brings about a number of opportunities for new services and business models. Taking a cue from cloud computing, we have developed the SYNAISTHISI platform, which promotes and enables the idea of ubiquitous sensing and intelligence. The purpose of the SYNAISTHISI project is to research and build a system that can seamlessly interface with heterogeneous components (devices or biological entities) offering sensing, processing, and actuation capabilities, and integrate them as reusable services that are managed and easily "weaved" into applications. The platform in essence constructs services from the "virtualized" components and facilitates their exploitation in a systematic, scalable, and potentially commercially viable manner.

To implement this platform, a number of multi-agent systems technologies are crucial. First, to create complex cyber-physical systems (CPS) on demand from available services, we develop agent strategies and mechanisms that facilitate the allocation of services to specific CPS. More specifically, customers submit blueprints of the CPS to be created together with a budget, and their agents bid for possible services (offered by service providers) so as to maximize their customers' expected profit. A mechanism with desirable properties (e.g., efficiency, fairness) is developed to match these services to customers, along with strategies for the customer agents that allow them to deal with uncertainties, such as failing services or realizing at run time that more services are necessary than those invoked initially.

Second, for one of the SYNAISTHISI pilots, pertaining to the smart grid, our goal is to incentivize the usage of renewable energy and to flatten the demand from traditional power plants (coal, etc.). To achieve this, we develop multi-agent-based technologies for Demand Reduction (DR) and Demand Side Management (DSM). Unlike existing DR/DSM technologies, however, our approach is not centralized; rather, each actor (e.g., a household or end user) optimizes its energy usage individually. To coordinate the process, we developed mechanisms (giving pricing or social incentives through appropriate gamification) so that the optimal action for users and their agents is to shift and/or reduce their demand to coincide with times when energy is more abundant and available from renewable sources. The benefits of this approach are increased privacy, and the fact that no central system imposes a decision on the user; rather, users themselves choose to take the actions that are good for the overall system.

Biography: Dr. Ioannis A. Vetsikas holds a PhD from Cornell University, USA (2005), on the topic of designing trading agents in complex systems in the presence of tradeoffs. His research interests lie in the areas of Distributed Artificial Intelligence, Multi-agent Systems, and e-Commerce. He has published papers in the top conferences on AI and multi-agent systems. He worked as a senior researcher on the award-winning Aladdin project, which conducted research on intelligent autonomous agents and examined the properties of the resulting multi-agent systems. He is investigating applications of these techniques in a number of areas, e.g., service procurement, electricity markets, and optimization. His work draws upon decision- and game-theoretic optimization and examines both the design and development of agent strategies and algorithms, as well as the corresponding mechanisms that work in tandem with the strategies to achieve desired system-wide properties. He is actively involved in the trading agent community. His agents have won the International Trading Agent Competition (TAC) on several occasions. He served as general chair for TAC-10 and TAC-13 and is currently on the board of directors of the Association for Trading Agent Research.

Second talk Title: AMINESS: Analysis of Marine Information for Environmentally Safe Shipping
Friday, April 10, 2015
Ioannis A. Vetsikas, PhD

Abstract: The Aegean Sea represents an extreme example of marine safety risk, waiting for a catastrophic event to happen. During the last few years, the traffic of tankers to and from the Black Sea passing through its narrow straits has increased. Reducing the possibility of ship accidents in the Aegean Sea is important to all social, economic, environmental, and cultural sectors of Greece. And yet, there are no national-level monitoring policies or pollution prevention and response mechanisms. Furthermore, legal limitations have so far made it hard to enforce shipping routes in the Aegean Sea, as numerous recently recorded IMO (International Maritime Organization) violations testify. Even ships flying "flags of convenience" are allowed to formulate routes according to their best judgment, routes which, in an area of especially high traffic intensity, very rarely follow a straight line. However, shipping companies and captains may sometimes neglect the financial risks that would derive from a possible accident caused by unsafe trajectories and local weather and sea conditions. These stakeholders would directly benefit from a monitoring system that suggests optimal safe routes while also delivering real-time alerts for their vessels when the possibility of an accident increases. At the same time, such a monitoring system would serve as a valuable tool for policy development, by systematically recording and analyzing ship routes and meteorological and sea conditions, utilizing a direct connection to accident and near-miss records.

Based on the above considerations, the goal of the AMINESS project is to promote shipping safety in the Aegean Sea through a web portal offering different levels of access to relevant stakeholders such as ship owners, policy makers, the scientific community, and the general public. The portal will have three principal uses:

  • To suggest safe route plans for ships that are optimal for both the vessel and the environment.
  • To produce real-time alerts for ships with respect to potential hazards posed by other ships, as a function of each ship's location and planned route, its cargo and the meteorological/sea conditions.
  • To support policy recommendations, through short- and long-term analyses of historical data that correlate safety with ship trajectories.

The project uses a range of historical and real-time spatiotemporal and marine data, including (a) vessel identity, position, speed and other relevant information such as ship cargo, e.g. from the AIS (Automatic Identification System), (b) online weather and sea forecasting data, such as wave heights and wind directions, and (c) geographical information indicating the position of sea and land and the depth of the sea bed.
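As a toy illustration of how a portal might combine an AIS-style position report with a sea-state forecast to raise an alert, consider the sketch below. The record fields, thresholds and rule are illustrative assumptions, not the AMINESS design.

```python
from dataclasses import dataclass

@dataclass
class VesselReport:
    """Simplified AIS-style position report (fields are illustrative)."""
    mmsi: int            # vessel identifier
    lat: float
    lon: float
    speed_knots: float
    hazardous_cargo: bool

def needs_alert(report: VesselReport, wave_height_m: float,
                wave_limit: float = 4.0, hazmat_limit: float = 2.5) -> bool:
    """Raise an alert when the forecast wave height exceeds the vessel's limit;
    ships carrying hazardous cargo get a stricter sea-state threshold."""
    limit = hazmat_limit if report.hazardous_cargo else wave_limit
    return wave_height_m > limit
```

A real system would of course reason over the planned route and traffic density as well, not a single point forecast.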

Biography: Dr. Ioannis A. Vetsikas holds a PhD from Cornell University, USA (2005) on the topic of designing trading agents in complex systems in the presence of tradeoffs. His research interests lie in the areas of Distributed Artificial Intelligence, Multi-agent Systems and e-Commerce. He has published papers in the top conferences on AI and multi-agent systems. He has worked as a senior researcher on the award-winning Aladdin project, which conducted research on intelligent autonomous agents and examined the properties of the resulting multi-agent systems. He is investigating applications of these techniques in a number of areas, e.g. service procurement, electricity markets and optimization. His work draws upon decision- and game-theoretic optimization and examines both the design and development of agent strategies and algorithms, as well as the corresponding mechanisms that work in tandem with the strategies to achieve desired system-wide properties. He is actively involved in the trading agent community. His agents have won the International Trading Agent Competition (TAC) on several occasions. He served as general chair for TAC-10 and TAC-13 and is currently on the board of directors of the Association for Trading Agent Research.

Threat Analysis in Online Social Network Systems
Friday, April 10, 2015
Hassan Takabi, PhD
University of North Texas

Abstract: Online social networks (OSNs), including Location-Based Social Networks (LBSNs), have experienced exponential growth in recent years. These OSNs have changed the way that users interact and offer attractive means of online social interaction and communication, but they also raise privacy and security concerns.

In this talk, we investigate and analyze various security and privacy issues in the most popular online social network systems, such as Facebook and Foursquare. More specifically, we talk about Identity Clone Attacks (ICAs), Friendship Identification and Inference (FII) attacks, and Venue Attacks. ICAs aim at creating fake identities for malicious purposes on OSNs. Such attacks severely affect the trust relationships a victim has built with other users if no active protection is applied. In an FII attack scenario, an adversary accumulates the initial attack-relevant information from the friend lists visible to them in a social network and utilizes this information to identify and infer a target's friends using a random walk based approach. In venue attacks, an attacker manipulates attributes related to venues to deceive users, compromise their privacy and destroy the reputation of venues in an LBSN.

We first analyze and characterize behaviors of these attacks, formally define the attacks and present the attack steps, the attack algorithm and various attack schemes. We then study what makes such attacks successful and discuss potential defense approaches against these attacks. Finally, we present experimental results to demonstrate flexibility and effectiveness of the proposed approaches.
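To make the random-walk idea behind the FII attack concrete, here is a minimal sketch: short walks started from the target's known friends over the visible friend graph tend to concentrate on likely (hidden) friends. The graph, walk length and scoring are illustrative assumptions, not the speaker's actual algorithm.

```python
import random
from collections import Counter

def infer_friends(adj, seeds, n_walks=2000, walk_len=3, rng=None):
    """Rank candidate friends of a target by visit frequency of short random
    walks started from the target's known friends (seeds).
    adj: dict node -> list of publicly visible friends."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    visits = Counter()
    for _ in range(n_walks):
        node = rng.choice(seeds)
        for _ in range(walk_len):
            node = rng.choice(adj[node])
            visits[node] += 1
    for s in seeds:                # known friends are not new inferences
        visits.pop(s, None)
    return [n for n, _ in visits.most_common()]
```

Nodes tightly connected to several known friends accumulate the most visits and therefore rank first.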

Biography: Hassan Takabi is an Assistant Professor in the Department of Computer Science and Engineering at the University of North Texas. He is founder and director of the INformation Security and Privacy: Interdisciplinary Research and Education (INSPIRE) Lab and is affiliated with the Center for Information and Computer Security (CICS), designated as a National Center of Academic Excellence in Information Assurance Research (CAE-R) and Education (CAE-IAE). His research focuses on various aspects of cybersecurity, including privacy and security of online social networks, cloud computing security, advanced access control models, insider threats, and usable security and privacy. He has authored or co-authored more than 30 peer-reviewed articles and book chapters in those areas. He is a member of IEEE and ACM.

Data Journalism Today: Applications and Problems
Friday, March 27, 2015
Jon McClure & Daniel Lathrop
The Dallas Morning News

Abstract: Long before Nate Silver collected his first baseball card, journalists were exploiting technology, computation and advanced statistics to report the news. But with the advent of widespread electronic communication and record keeping, the open source software movement and the relocation of our core audience online, journalism's relationship with programming has changed. This talk will explore how journalism thrives in today's modern programming environment, and how newspapers are using code to bootstrap the news.

We will look at several examples of how The Dallas Morning News performs large and small scale data analyses, crafts engaging interactive content online and builds tools for the 21st century newsroom.

We will talk about how we participate in the open source community and how we hope to explore partnership opportunities with other programmers and academics for public service projects.

We will also talk about the kinds of problems we believe are on the near horizon as newsrooms grow leaner and must be smarter in marshalling their investigative resources: problems like supplementing the shoe-leather reporting model with intuition gained through machine learning applications. But we'll also talk about the practical problems we're immediately facing: a lack of simple journalistic productivity tools and limited resources to build them.

Biography:
Jon McClure is the News Applications Specialist with the Projects team at The Dallas Morning News. He is also a staff writer and contributes original reporting and investigations. Previously he was a database specialist at the National Institute for Computer Assisted Reporting and spent time at investigative magazine The Detail in Belfast, Northern Ireland, where his work appeared in the broadsheet the Irish Times and on BBC Radio Ulster.

Daniel Lathrop is the Projects Data Editor at The Dallas Morning News. He's won numerous national awards for data journalism and investigative reporting, including the Edgar A. Poe Award of the White House Correspondents Association.

Big Data Engineering at the National Center for Scientific Research (NCSR) - Demokritos
Friday, March 27, 2015
Vangelis Karkaletsis, PhD
NCSR "Demokritos"

Abstract: In this talk we will present our activities addressing challenges that arise when applying data analytics to heterogeneous and large-scale data. Heterogeneity and transparent distribution are the focus of the SemaGrow project (http://semagrow.eu/), which approaches them through federated querying that is transparently optimized and where semantic transformations are applied dynamically. The outcome is a stack of technologies that simplifies both the inclusion of heterogeneous data sources in a federated end-point and the development of client applications for this end-point. The SemaGrow Stack will be integrated into the Big Data Aggregator being developed in the recently started Big Data Europe project (http://www.big-data-europe.eu/). The Big Data Aggregator will be piloted on diverse and challenging use cases defined by domain experts across the board of data-intensive science and technology.

Biography: Dr. Vangelis Karkaletsis holds the position of Research Director at NCSR "Demokritos", is the head of the Software and Knowledge Engineering Laboratory (SKEL) of the Institute of Informatics and Telecommunications, and is responsible for the Institute's educational activities. His research interests are in the areas of Language and Knowledge Engineering, as applied to content analysis, natural language interfaces and ontology engineering. He has extensive experience in the coordination and technical management of European and national projects. He is currently technical/scientific manager of the FP7-ICT project NOMAD on web content analysis for e-government applications, the FP7-ICT project SemaGrow on the efficient discovery of web resources, the FP7-ICT project C2Learn on computational tools fostering human creativity, and the H2020 project Your Data Stories on the analysis of open governmental data and its linking to the social web. He is also site manager for the H2020 Big Data Europe project on the development of a Big Data Integrator platform, and coordinator of the H2020 RADIO project on the use of robots in assisted living environments.

He served for several years on the Board of the Hellenic Artificial Intelligence Society. He has organised, or been a committee member of, many workshops and conferences; he was the Local Chair of the 12th Conf. of the European Chapter of the ACL (EACL-09), co-chair of the 6th Hellenic AI Conference (SETN-10), and organiser of the International Research Summer Schools (IRSS-2013, 2014). He has taught postgraduate courses on language and knowledge technologies for many years. He is co-founder of the spin-off company 'i-sieve Technologies', which exploited SKEL research work on on-line content analysis. He is currently involved in the founding of the new spin-off company Newsum, which exploits SKEL technology on multilingual and multi-document summarization.

Argument Mining from News and the Social Web
Friday, March 27, 2015
George Petasis, PhD
NCSR "Demokritos"

Abstract: Argumentation is a branch of philosophy that studies the act or process of forming reasons and drawing conclusions in the context of a discussion, dialogue, or conversation. Being an important element of human communication, it appears frequently in texts as a means of conveying meaning to the reader. Locating and identifying arguments can provide valuable help in recognizing the issues under discussion, in following how the discussion has evolved, and in determining what its outcome might have been. This is one aspect of the research field known as computational argumentation, which, apart from identifying arguments, also studies the process of reasoning with arguments and the process of argumentation among computational agents. This talk focuses on argument mining, the process of identifying arguments in documents. It will provide an overview of argumentation, the structure of arguments, and the state of the art in argument mining from various types of documents, along with recent advances towards argument mining from the social web. Finally, it will present some recent applications of argument mining, especially in the area of e-governance, and our experience through the NOMAD project (http://nomad-project.eu/).

Biography: Dr. Georgios Petasis is a research scientist at the Software and Knowledge Engineering Laboratory (SKEL), in the Institute of Informatics and Telecommunications at NCSR "Demokritos", in Athens, Greece. He holds a PhD in Computer Science from the University of Athens on the topic of machine learning for natural language processing. His research interests lie in the areas of natural language processing, knowledge representation and machine learning, including information extraction, ontology learning, linguistic resources, grammatical inference, speech synthesis and natural language processing infrastructures. He is the author of the Ellogon natural language engineering platform.

He is a member of the programme committees of several international conferences and has been involved in more than 15 European and national research projects. As a visiting professor at the University of Patras, he has taught both undergraduate and postgraduate courses. His work has been published in more than 50 international journals, conference proceedings and books. He is the treasurer and a member of the board of the Greek Artificial Intelligence Society (EETN). Finally, he is a co-founder of "Intellitech", a Greek company specialising in natural language processing.

Millimeter-wave 5G: Harvesting the High Frequencies for a Connected Society
Thursday, March 05, 2015
Jerry Pi
Straight Path Communications Inc.

Abstract: Mobile computing is one of the greatest advances in the history of technology. With the proliferation of smart devices and the explosion of mobile data traffic, leading experts are calling for a 5G system that can provide a 1000x capacity increase over 4G. One of the main candidate technologies is millimeter-wave 5G. In this talk, we describe our millimeter-wave 5G vision and the recent industry and regulatory developments regarding this technology. We explain the fundamentals of a millimeter-wave 5G system, including the network architecture, air interface, phased antenna arrays, hybrid spatial processing, and millimeter-wave channel measurement results. Preliminary performance studies show that millimeter-wave 5G systems can achieve a 20x-30x capacity improvement over a fully configured 4G LTE system. Together with other promising 5G technologies such as massive MIMO and small cells, millimeter-wave 5G puts the ambitious goal of a 1000x capacity increase within striking range.
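A large part of the headline gain comes simply from bandwidth: Shannon capacity C = B·log2(1 + SNR) grows linearly with bandwidth B at a fixed SNR, and the millimeter-wave bands offer far more spectrum than sub-6 GHz bands. A back-of-envelope comparison (the bandwidths and SNR below are illustrative assumptions, not figures from the talk):

```python
from math import log2

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon capacity C = B * log2(1 + SNR) for an AWGN channel."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * log2(1 + snr_linear)

# Hypothetical comparison at the same 10 dB SNR:
lte = shannon_capacity_bps(100e6, 10)   # 100 MHz LTE-like carrier
mmw = shannon_capacity_bps(1e9, 10)     # 1 GHz millimeter-wave allocation
print(mmw / lte)                        # capacity scales linearly with bandwidth
```

Beamforming with large phased arrays then recovers the link budget lost to higher path loss at these frequencies.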

Biography: Jerry Pi is the Chief Technology Officer of Straight Path Communications Inc., a leading communication asset company with one of the largest 39 GHz and 28 GHz spectrum portfolios in the United States. He leads the mobile communication technology strategy and R&D that maximize the value of these spectrum assets. Prior to joining Straight Path, Jerry was a Senior Director at Samsung Research America in Dallas, Texas, where he led system research, standardization, and prototyping activities in 4G and 5G. Jerry pioneered the development of millimeter-wave 5G with the world's first invention and first journal article on millimeter-wave mobile communication. He also led the development of the world's first 5G baseband and RF system prototype that successfully demonstrated the feasibility of 5G mobile communication at 28 GHz. Before joining Samsung in 2006, he was with Nokia Research Center in Dallas and San Diego, where he was a leading contributor to Nokia's 3G wireless standardization and modem development. He has authored more than 30 technical journal and conference papers and is the inventor of more than 150 patents and applications. He holds a B.E. degree from Tsinghua University (with honor), an M.S. degree from the Ohio State University, and an MBA degree from Cornell University (with distinction). He is a Senior Member of IEEE.

Anomaly Detection in Co-evolving Data Streams
Friday, October 24, 2014
Jing He, PhD
College of Engineering and Science
Victoria University, Australia
http://www.vu.edu.au/contact-us/jing-he

Abstract: Detecting/predicting anomalies from multiple correlated data streams is valuable in applications where a credible real-time event prediction system can minimise economic losses (e.g. a stock market crash) and save lives (e.g. medical surveillance in the operating theatre). This talk will introduce effective and efficient methods for mining anomalies in multiple correlated, co-evolving data streams in an online, real-time manner. It includes the detection/prediction of anomalies by analysing differences, changes, and trends in correlated multiple data streams. The predicted anomalies often convey critical and actionable information in several application domains.
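One simple way to flag anomalies in a pair of correlated streams is to monitor the residual between them with exponentially weighted statistics, raising a flag when the residual drifts far from its running mean. This is a generic sketch with made-up parameters, not the talk's methods:

```python
class ResidualAnomalyDetector:
    """Online detector: flags points where the residual x - y drifts more
    than `k` standard deviations from its exponentially weighted mean."""
    def __init__(self, alpha=0.05, k=3.0):
        self.alpha, self.k = alpha, k
        self.mean, self.var, self.n = 0.0, 1.0, 0

    def update(self, x, y):
        r = x - y
        self.n += 1
        if self.n <= 20:            # warm-up: just learn the statistics
            anomalous = False
        else:
            anomalous = abs(r - self.mean) > self.k * self.var ** 0.5
        # exponentially weighted updates of mean and variance
        d = r - self.mean
        self.mean += self.alpha * d
        self.var = (1 - self.alpha) * (self.var + self.alpha * d * d)
        return anomalous
```

Each update is O(1) in time and memory, which is what makes this family of techniques viable on high-rate streams.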

Biography: Dr. Jing He is an associate professor in the College of Engineering and Science at Victoria University, Australia. She was awarded a PhD degree by the Academy of Mathematics and Systems Science, Chinese Academy of Sciences, in 2006. Prior to joining Victoria University, she worked at the University of Chinese Academy of Sciences, China, from 2006 to 2008. She has been active in the areas of data mining, web services/web search, spatial and temporal databases, multiple criteria decision making, intelligent systems, and scientific workflows, and in industry fields such as e-health, petroleum exploration and development, water resource management and e-research. She has published over 60 research papers in refereed international journals and conference proceedings, including ACM Transactions on Internet Technology (TOIT), IEEE Transactions on Knowledge and Data Engineering (TKDE), Information Systems, The Computer Journal, Computers and Mathematics with Applications, Concurrency and Computation: Practice and Experience, International Journal of Information Technology & Decision Making, Applied Soft Computing, and Water Resources Management. Her research has been supported since 2008 by the Australian Research Council (ARC), through an ARC Early Career Researcher Award (DECRA), an ARC Discovery project and an ARC Linkage project, and by the National Natural Science Foundation of China (NSFC).

Towards Accurate Analysis of Noisy Data
Friday, October 17, 2014
Sanjukta Bhowmick, PhD
University of Nebraska at Omaha

Abstract: Analysis of vast amounts of information, popularly known as "Big Data", has become a ubiquitous operation in many disciplines. However, many analysis methods overlook the fact that most real-world data inherently contain some amount of inaccuracy or noise. The presence of noise can negatively affect the correctness of the obtained results. My talk focuses on efficient sequential and parallel algorithms for analyzing such noisy data. Using examples from community detection and centrality metrics, I will demonstrate how it is imperative to measure the well-posedness, sensitivity and stability of the analysis problem to obtain reliable results. I will discuss some of our ongoing work, in particular the utility of a new metric called "permanence", to understand and reduce the effect of noise and provide accurate results.
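Permanence, as defined in the network-science literature, scores how firmly a vertex is held inside its community: Perm(v) = (I(v)/E_max(v))·(1/D(v)) − (1 − c_in(v)), where I(v) counts internal neighbours, E_max(v) is the maximum number of connections to any single external community, D(v) is the degree, and c_in(v) is the clustering coefficient among internal neighbours. A sketch following that published definition (the graph and community labels below are toy assumptions):

```python
from itertools import combinations
from collections import Counter

def permanence(v, adj, community):
    """Permanence of vertex v in its assigned community.
    adj: dict node -> set of neighbours; community: dict node -> label."""
    cv = community[v]
    nbrs = adj[v]
    internal = [u for u in nbrs if community[u] == cv]
    I, D = len(internal), len(nbrs)
    # Maximum connections to any single external community
    ext = Counter(community[u] for u in nbrs if community[u] != cv)
    E_max = max(ext.values()) if ext else 1   # no external edges: divide by 1
    # Clustering coefficient among v's internal neighbours
    pairs = list(combinations(internal, 2))
    c_in = (sum(1 for a, b in pairs if b in adj[a]) / len(pairs)) if pairs else 0.0
    return (I / E_max) * (1.0 / D) - (1.0 - c_in)
```

Values near 1 indicate a vertex firmly inside its community; values near −1 indicate a likely misassignment, which is what makes the metric useful for gauging sensitivity to noise.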

Biography: Sanjukta Bhowmick is an Assistant Professor in the College of Information Science and Technology at the University of Nebraska at Omaha. She received her Ph.D. in Computer Science from the Pennsylvania State University. Her core research area is in high performance computing with a focus on the synergy of combinatorial and numerical methods. Her current projects focus on designing parallel, efficient and robust algorithms for analyzing large-scale noisy networks. Her work has been funded by NSF (EPSCoR), NIH (INBRE), AFRL, and State of Nebraska.

Expeditions in Applied Distributed Computing: Towards the Next-generation of Distributed Cyberinfrastructure
Friday, October 03, 2014
Shantenu Jha, PhD
Rutgers University

Abstract: To support the science and engineering applications that underpin many societal and intellectual challenges of the 21st century, there is a need for comprehensive, balanced and flexible distributed cyberinfrastructure (DCI). The process of designing and deploying such large-scale DCI, however, presents a critical and challenging research agenda along at least two dimensions: conceptual challenges and implementation challenges.

The first is the ability to architect and federate large scale distributed resources so as to have both predictable performance of the collective infrastructure and the ability to plan and reason about executing distributed workloads. The second - implementation challenge - is to produce tools that provide a step change in the sophistication and scale of problems that can be investigated using DCI, while being extensible, easy to deploy and use, as well as being compatible with a variety of other established tools.

In the first part of the talk we will discuss how we are laying the foundations for the design and architecture of the next-generation of distributed cyberinfrastructure. In the second part of the talk, we will introduce RADICAL-Cybertools -- a standards-based, abstraction-driven approach to High-Performance Distributed Computing. RADICAL Cybertools builds upon important theoretical advances, production software development best practices and carefully analyzed usage and programming models. We will discuss several science and engineering applications that are currently using RADICAL Cybertools to utilize DCIs in a scalable and extensible fashion. We will conclude with a discussion of the connection between the two challenges.

Biography: Shantenu Jha is an Associate Professor of Computer Engineering at Rutgers University. His research interests lie at the triple point of Cyberinfrastructure R&D, Applied Computing and Computational Science. Before moving to Rutgers, he was the lead for Cyberinfrastructure Research and Development at Louisiana State University.

His research is currently supported by DOE and multiple NSF awards, including CAREER, SI2 (elements, integration and conceptualization), CDI and EarthCube. Shantenu leads the RADICAL-Cybertools project, a suite of standards-driven and abstractions-based tools used to support large-scale science and engineering applications. He is co-leading a "Conceptualization of an Institute for Biomolecular Simulations" project. He is also designing and developing "MIDAS: Middleware for Data-intensive Analytics and Science" as part of HPBDS (http://arxiv.org/abs/1403.1528), a 2014 NSF DIBBS project.

Away from work, Jha tries middle-distance running and biking, tends to indulge in random musings as an economics-junky, and tries to use his copious amounts of free time with a conscience.

Verified Switched Control System Design using Real-Time Hybrid Systems Reachability
Friday, September 26, 2014
Stanley Bak, PhD
Air Force Research Lab

Abstract: The Simplex Architecture ensures the safe use of an unverifiable complex controller by pairing it with a verified safety controller and verified switching logic. This architecture enables the safe use of high-performance, untrusted, and complex control algorithms without requiring them to be formally verified. Simplex incorporates a supervisory controller and a safety controller that take over control if the unverified logic misbehaves. The supervisory controller should (1) guarantee the system never enters an unsafe state (safety), but (2) use the complex controller as much as possible (minimize conservatism).

The problem of precisely and correctly defining this switching logic has previously been considered either using a control-theoretic optimization approach, or through an offline hybrid systems reachability computation. In this work, we prove that a combined online/offline approach, which uses aspects of the two earlier methods along with a real-time reachability computation, also maintains safety, but with significantly less conservatism. We demonstrate the advantages of this unified approach on a saturated inverted pendulum system, where the usable region of attraction is 227% larger than the earlier approach.
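The switching decision can be sketched abstractly: each decision period, over-approximate the states reachable under the complex controller for a short horizon, and fall back to the safety controller only if that set leaves the safe region. The 1-D integrator dynamics, intervals and bounds below are illustrative assumptions, not the paper's pendulum model:

```python
def reach_interval(x, u, horizon, eps=0.05):
    """Over-approximate the states reachable by a 1-D integrator x' = u
    over `horizon` seconds as an interval, padded by a bloating term eps."""
    lo, hi = sorted((x, x + u * horizon))
    return lo - eps, hi + eps

def simplex_step(x, u_complex, u_safety, safe_lo=-1.0, safe_hi=1.0, horizon=0.1):
    """One period of Simplex switching: use the unverified complex controller
    only when its short-horizon reachable set provably stays in the safe set."""
    lo, hi = reach_interval(x, u_complex, horizon)
    if safe_lo <= lo and hi <= safe_hi:
        return u_complex    # provably safe: keep the high-performance controller
    return u_safety         # otherwise fall back to the verified controller
```

Because the check uses the actual current state rather than a precomputed worst case, the online computation can certify states a purely offline analysis would have to exclude, which is the source of the reduced conservatism.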

Biography: Stanley Bak received a Bachelor's degree in Computer Science from Rensselaer Polytechnic Institute (RPI) in 2007 (summa cum laude) and a Master's degree in Computer Science from the University of Illinois at Urbana-Champaign (UIUC) in 2009. He completed his PhD in the Department of Computer Science at UIUC in 2013 and joined the Air Force Research Lab, Information Directorate, in Rome, NY in August 2013. He received the Founders Award of Excellence for his undergraduate research at RPI in 2004, the Debra and Ira Cohen Graduate Fellowship from UIUC twice, in 2008 and 2009, and was awarded the Science, Mathematics and Research for Transformation (SMART) Scholarship from 2009 to 2013. His main research interests include hybrid systems verification, validated solutions for ordinary differential equations (ODEs), distributed safety and progress for networked cyber-physical systems, and real-time and embedded systems.

Building Robust Systems for the Energy Constrained Future: Application and Algorithm Aware Approaches
Friday, April 18, 2014
Joseph Callenes-Sloan

Abstract: As late-CMOS process scaling leads to increasingly variable circuits/logic, and as most post-CMOS technologies in sight appear to have largely stochastic characteristics, hardware reliability has become a first-order design concern. High Performance Computing (HPC) systems today can see computational errors at rates of once a week to once every 4-5 hours. At Lawrence Livermore National Laboratory (LLNL), for example, L1 cache soft errors occurred about once every five hours on the 104K-node BlueGene/L system. As the number of devices per system scales into the millions, errors will become even more pervasive. In fact, a recent DARPA study on exascale computing found that evolutionary extensions of today's HPC systems (Cray XT, BlueGene) will be unable to reach exaFLOP performance by 2020 within a power budget of 20MW, the typical limit of modern computing centers. For these systems, the study observes that "traditional resiliency solutions will not be sufficient". To make matters worse, emerging computing systems (e.g. mobile and HPC) are becoming increasingly power constrained. Traditional hardware/software approaches are likely to be impractical for these power-constrained systems due to their heavy reliance on redundant, worst-case, and conservative designs. Instead, we investigate how we can leverage inherent application and algorithm characteristics (e.g. natural error resilience, spatial and temporal reuse, and fault containment) to build more efficient robust systems. In this talk, I will describe algorithmic and architectural approaches that leverage application and algorithm awareness for building such systems. These approaches include (a) a numerical optimization-based methodology for converting applications into a more error tolerant form, (b) application-specific techniques for low-overhead fault detection and correction, and (c) hardware support to leverage application-level error resilience.
Our studies show that application and algorithm-awareness can significantly increase the robustness of computing systems, while also reducing the cost of meeting reliability targets.
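A classic instance of low-overhead, application-specific fault detection is algorithm-based fault tolerance (ABFT) for matrix multiplication: a checksum identity verifies C = A·B at O(n²) extra cost instead of duplicating the O(n³) computation. A minimal sketch of the idea (illustrative only; the talk's techniques go further):

```python
def matmul(A, B):
    """Naive dense matrix multiply on lists of lists."""
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def abft_check(A, B, C, tol=1e-9):
    """ABFT consistency check: the row vector of A's column sums times B
    must equal the column sums of C, since e^T (A B) = (e^T A) B."""
    m, p = len(B), len(B[0])
    col_sum_A = [sum(A[i][k] for i in range(len(A))) for k in range(m)]
    expected = [sum(col_sum_A[k] * B[k][j] for k in range(m)) for j in range(p)]
    actual = [sum(C[i][j] for i in range(len(C))) for j in range(p)]
    return all(abs(e - a) <= tol for e, a in zip(expected, actual))
```

A single corrupted entry of C perturbs exactly one column sum, so the check detects it; combining row and column checksums additionally localizes (and can correct) the faulty entry.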

Biography: Joseph Callenes-Sloan received a B.S. degree in electrical engineering and a B.S. degree in computer engineering from Iowa State University in 2007, and a M.S. and Ph.D. degree in electrical and computer engineering from the University of Illinois at Urbana-Champaign in 2011 and 2013, respectively. In the Fall of 2013 he joined the the Erik Jonsson School of Engineering and Computer Science at the University of Texas at Dallas as an Assistant Professor in the Electrical Engineering Department. His research interests include fault-tolerant computing, high performance and scientific computing, computer architecture, and low-power design. Joseph's research has been recognized by the Yi-Min Wang and Pi-Yu Chung Endowed Research Award, a Best Paper in Session Award at SRC TECHCON 2011, a 2012 ECE/Intel Computer Engineering Fellowship, and has been the subject of several keynote talks, invited plenary lectures, and invited articles. His research also forms a core component of the 2010 NSF Expedition in Computing Award and has been covered by media sources, including BBC News, IEEE Spectrum, and HPCWire.

Automated Time Series Modeling and Pattern Learning for Personalized Healthcare Decision-Making Systems
Friday, March 21, 2014
Shouyi Wang

Abstract: With recent advances in information and storage technologies, online monitoring of physiological signal streams and prediction or early warning of hazardous events have become more and more important in many real-time healthcare and safety decision-making applications, such as seizure onset in patients with epilepsy, detection of cognitive distraction in dangerous working environments, and chronic disease management and remote patient monitoring. However, physiological signals exhibit great intra- and inter-individual variability, and are often nonstationary, irregular, chaotic, and noisy. It is challenging to handle massive sensory data streams and to extract useful information for personalized decision-making. This talk will present two personalized online monitoring and medical decision-making frameworks: a pattern-based approach and a regression-based approach. Specifically, a new algorithm was developed to model nonstationary time series streams, and a new adaptive pattern-learning framework was proposed to discover personalized predictive physiological patterns associated with a specific event or state. The pattern-based adaptive learning approach has been successfully applied to three challenging real-world problems: online prediction of seizure onset for patients with epilepsy, online prediction of cognitive distraction for drivers in unfamiliar environments, and cognitive workload identification using multichannel electroencephalogram (EEG) signals. In the second part, a regression-based prediction approach will be discussed. Using pattern analysis of respiratory motion time series signals, a personalized PET/CT service recommendation system was developed for different groups of patients with lung tumors.
The general structures of the automated pattern-based and the regression-based prediction framework make them applicable to design various medical decision support systems to provide timely and effective decision-making information, which allows physicians or individuals to take proactive actions to avoid or reduce their risk and prepare for effective responses.

Biography: Shouyi Wang has been an assistant professor in Industrial and Manufacturing Systems Engineering at UTA since August 2013. Before joining UTA, he was a research associate at the University of Washington from 2011 to 2013, where he worked in the Complex Systems Modeling and Optimization (COSMO) laboratory and the Integrated Brain Imaging Center (IBIC) at the UW Medical Center. He earned his B.S. degree in Systems and Control Engineering from Harbin Institute of Technology (China) in 2003, an M.S. degree in Systems and Control Engineering from Delft University of Technology (Netherlands) in 2005, and a Ph.D. degree in Industrial and Systems Engineering from Rutgers University in 2012. His research interests mainly focus on time series data mining, pattern recognition, machine learning, intelligent healthcare decision-making systems, and interactive human-machine systems. His work is published in top-tier journals such as IEEE Transactions on Knowledge and Data Engineering; IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, and Part C: Applications and Reviews; Physics in Medicine and Biology; and Pattern Recognition Letters.

Tiramisu: Creating Transit Information via Participatory Sensing
Friday, February 28, 2014
Anthony Tomasic

Abstract: Participatory sensing is a type of crowdsourcing that leverages people and their smartphones as a network of sensors. In this talk, I describe Tiramisu, a currently deployed smartphone application that provides real-time public transit information in Pittsburgh, PA. In exchange for real-time bus arrival times, users contribute bus location, bus seat availability and problem reports. Tiramisu is a challenging crowdsourcing system because the half-life of a contribution is short (minutes), compared to Facebook (weeks) or Wikipedia (years). Tiramisu is also a platform for conducting real-world social computing and data science research. Over the last four years, our research team has utilized a variety of methodologies to study Tiramisu - such as surveys of public perception of transit services, laboratory measurement of prototype performance, field trials of prototypes, and A/B testing of user behavior in production systems. We have generated research results in transit arrival time modeling, user incentives to increase participation in crowdsourced systems, and battery conservation when utilizing mobile location services. The talk will provide an overview of our successes and lessons learned. This project has received some attention, including an award from Intelligent Transportation Systems of America for innovation and an award from the FCC for advances in accessibility for individuals with disabilities. Our interdisciplinary research team includes co-investigators Yun Huang and Charlie Garrod (Computer Science), Aaron Steinfeld (Human Factors), and John Zimmerman (Interaction Design), as well as many, many students from a range of disciplines. 
Funding provided by: the US Department of Education through NIDRR (National Institute on Disability and Rehabilitation Research), Traffic21 at Carnegie Mellon University, US DOT through the FTA/RITA SBIR, Technologies for Safe and Efficient Transportation UTC, IBM, Google, US NSF Quality of Life Technology Engineering Research Center, and the SINAIS project, a joint research project between Carnegie Mellon and the University of Madeira.

Biography: Anthony Tomasic is a Senior Systems Scientist at Carnegie Mellon University. His research now focuses on participatory sensing systems and internet accessibility for individuals with disabilities. Previous areas of interest include mixed-initiative interfaces, detection of phishing messages, internet level scaling of database systems, federated databases and information retrieval systems and performance of information retrieval systems. From 1999 to 2003 he participated in various internet start-ups. Anthony has a BS in Computer Science from Indiana University, a PhD in Computer Science from Princeton University, and an MBA from Carnegie Mellon University. For nine years he was director of the Master of Computational Data Science degree at CMU (aka MSIT in Very Large Information Systems). He is also the Managing Partner of Tiramisu Transit, LLC.

Translational Control Design for Lower-Limb Prosthetics: Lessons from Robot Locomotion
Friday, February 14, 2014
Robert Gregg

Abstract: High-performance prostheses could significantly improve the quality of life for nearly a million American lower-limb amputees, whose ambulation is slower, less stable, and requires more metabolic energy than that of able-bodied individuals. Although recent motorized prostheses have the potential to restore mobility in this impaired population, critical barriers in control technology still limit their clinical viability. These systems discretize the gait cycle into multiple distinct control models, each tracking reference joint torques, kinematics (angles/velocities), or impedances (stiffness/viscosity) that resemble human behavior. These increasingly complex designs are difficult to tune to individuals and generalize to different tasks, and their sequential controllers are not necessarily robust to external perturbations that push joint kinematics forward or backward in the gait cycle. However, recent bipedal robots can stably walk, run, and climb stairs with a single control model based on virtual constraints, which drive joint patterns as functions of a mechanical variable that continuously represents the robot's progression through the gait cycle, i.e., a sense of "phase." These breakthroughs in robot control theory present an emerging opportunity to address a key roadblock in prosthetic and orthotic control technology, which will be the topic of this talk. I will provide evidence that the Center of Pressure (COP) in the plantar sole serves as a phase variable in human locomotion, by which the neuromuscular system represents the phase of the gait cycle. A unifying prosthesis controller will then be designed to enforce biomimetic virtual constraints between the COP and joint angles, known in the prosthetics field as the "effective shapes" of the stance leg during walking. Recent experiments with above-knee amputee subjects using a powered prosthetic leg will be presented, and future research directions will be discussed.
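The virtual-constraint idea above can be suggested with a toy controller. This is a minimal sketch, not the controller from the talk: the COP range, the desired knee-angle profile, and the PD gains below are all invented for illustration, whereas a real phase variable and joint pattern would be derived from human walking data.

```python
# Toy virtual-constraint control sketch. All names, gains, and the desired-angle
# profile are illustrative assumptions, not the controller described in the talk.
import math

def phase_variable(cop, cop_min=0.0, cop_max=0.25):
    """Normalize a heel-to-toe Center of Pressure reading (meters) to s in [0, 1]."""
    s = (cop - cop_min) / (cop_max - cop_min)
    return min(max(s, 0.0), 1.0)

def desired_knee_angle(s):
    """Toy joint pattern parameterized by phase rather than by time (radians)."""
    return 0.2 + 0.5 * math.sin(math.pi * s)

def control_torque(theta, theta_dot, s, kp=80.0, kd=5.0):
    """PD feedback enforcing the virtual constraint theta = desired_knee_angle(s)."""
    error = theta - desired_knee_angle(s)
    return -kp * error - kd * theta_dot

s = phase_variable(cop=0.10)  # mid-stance COP reading
tau = control_torque(theta=0.45, theta_dot=0.1, s=s)
print(round(s, 2), round(tau, 2))
```

Here the torque drives the measured knee angle toward the phase-indexed pattern, so a perturbation that pushes the gait forward or backward simply re-indexes the same single controller instead of switching between discrete gait modes.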

Biography: Robert D. Gregg IV received the B.S. degree (2006) in electrical engineering and computer sciences from the University of California, Berkeley and the M.S. (2007) and Ph.D. (2010) degrees in electrical and computer engineering from the University of Illinois at Urbana-Champaign. He is an Assistant Professor of Mechanical Engineering and Bioengineering at the University of Texas at Dallas and the Director of the Locomotor Control Systems Laboratory. Prof. Gregg was previously a Research Scientist at the Rehabilitation Institute of Chicago and an Engineering into Medicine Fellow at Northwestern University. His research concerns the control mechanisms of bipedal locomotion with application to both wearable and autonomous robots. Prof. Gregg is a recipient of the NIH Director's New Innovator Award and the Burroughs Wellcome Fund's Career Award at the Scientific Interface. He also received the Best Technical Paper Award of the 2011 CLAWAR conference, the 2009 O. Hugo Schuck Award of the IFAC American Automatic Control Council, and the Best Student Paper Award of the 2008 American Control Conference. Dr. Gregg is a member of the IEEE Control Systems Society and the IEEE Robotics & Automation Society.

mHealth, Chronic Disease, and Phones-as-a-sensor Technology
Friday, January 31, 2014
Eric Larson

Abstract: The mHealth "revolution" has promised to deliver in-home healthcare that parallels the care we might receive in a physician's office. However, the panacea of digital health has proven to be messier and more problematic than its vision, especially for collecting and interpreting medical quantities in the home. In this talk I will discuss several successful projects for sensing medical quantities with a mobile phone using its embedded sensors (i.e., camera, microphone, accelerometer), and how these projects can increase compliance as well as enhance doctor-patient relationships. I will focus on the reliability and calibration of the sensing and the role of computer scientists and engineers in the future of mHealth.

Biography: Eric C. Larson is an Assistant Professor in the department of Computer Science and Engineering in the Bobby B. Lyle School of Engineering, Southern Methodist University. His main research interests are in machine learning, sensing, and signal & image processing for ubiquitous computing applications, in particular, for healthcare and environmental sustainability applications. His work in both areas has been commercialized and he holds a variety of patents for sustainability sensing and mobile phone-based health sensing. He is active in signal processing education for computer scientists and is an active member of IEEE. He received his Ph.D. in 2013 from the University of Washington, where he was co-advised by Shwetak N. Patel and Les Atlas. He received his B.S. and M.S. in Electrical Engineering in 2006 and 2008, respectively, at Oklahoma State University, where he was advised by Damon Chandler.

Careers in Computing: How to Prepare and What to Expect
Friday, January 24, 2014
Dennis Frailey

Abstract: Many college students concentrate on getting a job instead of preparing for a career. In a rapidly changing field like computing, this can lead to frequent job changes, burnout and dissatisfaction. Furthermore, today we hear a lot about outsourcing, and some wonder whether there will be computing careers in the future. This talk is based on the premise that the jobs will certainly change over time but there will be computing careers for a long time. Dr. Frailey shows how to prepare for a life-long career in computing, covering such topics as where the opportunities are, what it's like to work in a large, professionally run computing organization, how the field of computing has changed over the years, how it is likely to change in the future, and what hasn't changed. The talk also addresses what employers look for when hiring people in the computing field and what it takes to have a successful career.

Biography: Dennis is a recently retired Principal Fellow at Raytheon Company in Plano, Texas. He still teaches software engineering and computer science as an Adjunct Professor of Computer Science and Computer Engineering at Southern Methodist University (SMU). At Raytheon, Dennis was a leader in software engineering improvement as well as a specialist in software measurement and cycle time reduction. He was also an instructor in several internal Raytheon courses for project managers and software managers, and in past assignments served as a software project manager, computer architect, operating system designer, compiler designer, and speechwriter for company executives. Dennis previously worked at Texas Instruments, the Ford Motor Company, and as a tenured Associate Professor at SMU, where he helped start both the computer science and software engineering programs. Professionally, Dennis is a member of the IEEE Computer Society Board of Governors, vice-chair of the IEEE-CS Education Activities Board, chair of the Industry Advisory Board to the Texas Board of Professional Engineers, author of the software management portion of SWEBOK – the Guide to the Software Engineering Body of Knowledge, and an ABET accreditation evaluator in computer science, computer engineering and software engineering. Previously he was a member of the Computer Science Accreditation board of directors, ACM national vice president, ACM regional representative, chair of the Purdue University ACM student chapter, and chair of the Dallas Association for Software Engineering Excellence. He holds an MS and PhD in computer science (Purdue) and a BS in mathematics (Notre Dame). He was born in Tulsa.

New Adversary Models for Censorship Circumvention Schemes
Monday, December 09, 2013
Nicholas Hopper

Abstract: Internet censorship is the widespread practice, by state and corporate entities, of blocking access to some kinds of internet content deemed objectionable. Large-scale internet censors like the Chinese and Iranian governments have been engaged in an arms race with circumvention systems such as the Tor network that seek to allow users to circumvent this blocking and access arbitrary Internet content. This talk will discuss the current state of this arms race, some "next steps" proposed by circumvention researchers, and counterattacks that we should anticipate from Internet censors. In addition, we'll speculate about some new approaches that can withstand these counterattacks.

Biography: Nicholas Hopper is an Associate Professor of Computer Science & Engineering at the University of Minnesota, and the 2013-2014 Visiting Research Director for the Tor Project. He received a B.A. from the University of Minnesota in 1999 and a Ph.D. in Computer Science from Carnegie Mellon University in 2004. His research interests include online privacy, applied cryptography, and computer security.

Feature Engineering for Predictive Modeling with Large Scale Electronic Medical Records: Augmentation, Densification and Selection
Friday, December 06, 2013
Fei Wang

Abstract: Predictive modeling lies at the heart of many medical informatics problems, such as early detection of chronic diseases and patient hospitalization/readmission prediction. The data these predictive models are built upon are Electronic Medical Records (EMRs), which are systematic collections of patient information including demographics, diagnoses, medications, lab tests, etc. We refer to this information as patient features. High-quality features are of vital importance to building successful predictive models. However, the features extracted directly from EMRs are typically noisy, heterogeneous and very sparse. In this talk, I will present a feature engineering pipeline for constructing effective features from EMRs, which includes three steps: (1) feature augmentation, which constructs more effective derived features from existing ones; (2) feature densification, which imputes missing feature values; and (3) feature selection, which identifies the most representative and predictive features. I will also show empirical results on predicting the onset of Congestive Heart Failure in real-world patients to demonstrate the advantages of the proposed pipeline.
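As a rough illustration of the three steps, the following sketch runs augmentation, densification, and selection on a few toy rows. It is not the pipeline from the talk: the features (age, weight, a derived BMI), the mean-imputation rule, and the correlation-based ranking are simplifying assumptions standing in for the EMR-specific methods described above.

```python
# Illustrative three-step feature engineering sketch on toy EMR-like rows.
# Feature names, values, and ranking criteria are invented for illustration.
from statistics import mean

# Toy records: (age, weight_kg, height_m, lab_value, label) with None = missing
rows = [
    (70, 82.0, 1.75, 1.4, 1),
    (65, None, 1.68, 1.1, 0),
    (80, 95.0, 1.80, None, 1),
    (55, 70.0, None, 0.9, 0),
]

# 1) Feature augmentation: derive BMI where possible (a hypothetical derived feature)
def augment(r):
    age, w, h, lab, y = r
    bmi = w / (h * h) if w is not None and h is not None else None
    return [age, w, h, lab, bmi], y

X, y = zip(*[augment(r) for r in rows])

# 2) Feature densification: mean-impute missing values, column by column
def impute(X):
    cols = list(zip(*X))
    means = [mean(v for v in c if v is not None) for c in cols]
    return [[v if v is not None else means[j] for j, v in enumerate(row)]
            for row in X]

Xd = impute(list(X))

# 3) Feature selection: keep the k features most correlated with the label
def pearson(a, b):
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (z - mb) for x, z in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((z - mb) ** 2 for z in b)) ** 0.5
    return num / den if den else 0.0

def select(Xd, y, k=2):
    cols = list(zip(*Xd))
    scores = [abs(pearson(c, y)) for c in cols]
    return sorted(range(len(cols)), key=lambda j: -scores[j])[:k]

print(select(Xd, y))  # indices of the top-k features
```

On these toy rows the derived BMI feature happens to be uninformative, while age and the lab value rank highest; in a real EMR setting the augmentation and selection steps would of course use richer, clinically grounded criteria.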

Biography: Dr. Fei Wang is currently a research staff member in the Healthcare Analytics Research group, IBM T. J. Watson Research Center. He received his Ph.D. from the Department of Automation, Tsinghua University in 2008. Dr. Wang's major research interests include machine learning, data mining, social informatics and healthcare informatics. He has published over 100 papers in the top venues of the relevant fields. Dr. Wang has served as a program committee member/chair of many international conferences and workshops, and as a reviewer/guest editor for many reputable journals. He gave tutorials on various topics at CIKM 2008, SDM 2009, ICDM 2009, SDM 2012 and SDM 2013. He received an Honorable Mention for the Best Research Paper Award at ICDM 2010 and was a Best Research Paper finalist at SDM 2011. More information can be found on his homepage at https://sites.google.com/site/feiwang03/.

Optimal Dissemination on Graphs: Theory and Algorithms
Friday, December 06, 2013
Hanghang Tong

Abstract: Big graphs are prevalent and are becoming a popular platform for the dissemination of a variety of information (e.g., viruses, memes, opinions, rumors, etc.). In this talk, we focus on the problem of optimally affecting the outcome of dissemination by manipulating the underlying graph structure. We aim to answer two questions: (1) what are the key graph parameters for the so-called tipping point? and (2) how can we design effective algorithms to optimize such parameters in a desired way? We show that for a large family of dissemination models, the problem reduces to optimizing the leading eigenvalue of an appropriately defined system matrix associated with the underlying graph. We then present two algorithms as instantiations of this optimization problem: one to minimize the leading eigenvalue (e.g., stopping virus propagation) and the other to maximize it (e.g., promoting product adoption). If time allows, I will also introduce our other work on analyzing big graphs.
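To make the eigenvalue connection concrete, here is a toy sketch of the minimization side. The greedy rule, deleting the edge whose endpoints have the largest product of leading-eigenvector entries, is a standard first-order heuristic for maximizing the drop in the leading eigenvalue; the specific graph and the power-iteration details are invented for illustration and are not the algorithms from the talk.

```python
# Toy sketch of eigenvalue-driven edge deletion (standard library only).
# Graph, iteration counts, and the greedy rule are illustrative assumptions.

def power_iteration(adj, n, iters=200):
    """Leading eigenvalue/eigenvector of a symmetric 0/1 adjacency matrix."""
    u = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        v = [sum(u[j] for j in adj[i]) for i in range(n)]
        lam = max(abs(x) for x in v) or 1.0
        u = [x / lam for x in v]
    return lam, u

def best_edge_to_delete(adj, n):
    """Greedy choice: edge whose endpoint eigen-scores have the largest product."""
    _, u = power_iteration(adj, n)
    edges = {(i, j) for i in range(n) for j in adj[i] if i < j}
    return max(edges, key=lambda e: u[e[0]] * u[e[1]])

# Small graph: a triangle {0, 1, 2} plus a pendant node 3
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
lam, _ = power_iteration(adj, 4)
print(round(lam, 3), best_edge_to_delete(adj, 4))
```

Intuitively, lowering the leading eigenvalue of the adjacency matrix raises the effective epidemic threshold, which is why deleting well-chosen edges can keep a virus from tipping into an outbreak; the maximization variant runs the same machinery in reverse.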

Biography: Hanghang Tong is currently an assistant professor in the Computer Science Department, City College, City University of New York. Before that, he was a research staff member at IBM T.J. Watson Research Center and a post-doctoral fellow at Carnegie Mellon University. He received his M.Sc. and Ph.D. degrees from Carnegie Mellon University in 2008 and 2009, both in Machine Learning. His research interest is in large-scale data mining for graphs and multimedia. His research has been funded by NSF, DARPA and ARL. He has received several awards, including the best paper award at CIKM 2012, the best paper award at SDM 2008 and the best research paper award at ICDM 2006. He has over 70 refereed articles and more than 20 patents. He has served as a program committee member for top data mining, databases and artificial intelligence venues (e.g., SIGKDD, SIGMOD, AAAI, WWW, CIKM, etc.).

Verification and Validation for Reliable Cyber-Physical Systems
Friday, November 22, 2013
Taylor Johnson

Abstract: Computer-related defects in embedded systems are rampant, as exemplified by frequent product recalls in industries like automotive, healthcare, and industrial control systems. Defects in such cyber-physical systems (CPS) often result from the interaction of the cyber and physical components of the systems. In this talk, I will first review some recent recalls by agencies like the National Highway Traffic Safety Administration (NHTSA), the Food and Drug Administration (FDA), and the Consumer Product Safety Commission (CPSC) to illustrate examples of cyber-physical defects and their root causes. Next, I will overview our research contributions developing verification and validation analysis techniques and software tools for CPS. Finally, I will conclude with several cyber-physical systems under development in our lab, such as a modular networked turbidostat and a networked sun-tracking solar array using a multilevel converter as a grid-tie interface. These prototype cyber-physical systems are being used as case studies for evaluating our verification and validation methods.

Biography: Taylor T. Johnson is an Assistant Professor of Computer Science and Engineering at the University of Texas at Arlington. Taylor completed his PhD in Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign in 2013, where he worked in the Coordinated Science Laboratory with Prof. Sayan Mitra. Taylor completed his MSc at Illinois in 2010, earned a BSEE from Rice University in 2008, and was a visiting research assistant at the Air Force Research Laboratory's Space Vehicles Directorate at Kirtland Air Force Base in 2011. Taylor worked in industry for Schlumberger at various times between 2005 and 2010, helping develop new downhole embedded control systems. Taylor's research focus is developing algorithmic techniques and software tools to improve the reliability of cyber-physical systems. Taylor has published over a dozen papers on these methods and their applications in areas like power and energy systems, aerospace, and robotics, two of which were recognized with best paper awards. Taylor's research advances and applies techniques and tools from control theory, embedded systems, formal methods, and software engineering.

Geometric Approaches to Assistive Technologies and Related Problems
Friday, November 08, 2013
Ovidiu Daescu

Abstract: In this talk I will introduce some geometric methods that could be used successfully to develop assistive technologies to improve human performance. Examples will include medical procedures such as using flexible needles to extract tissue samples for biopsies and brachytherapy, efficiently locating medical providers in an area specified by the user, isolating friends from foes on the battlefield, and optimizing treatment planning and delivery in radiation therapy.

Biography: Ovidiu Daescu received his bachelor's degree in Computer Science and Automation from the Technical Military Academy, Bucharest, Romania, and his master's and PhD degrees in Computer Science and Engineering from the University of Notre Dame, IN, USA, in 1997 and 2000, respectively. Currently, he is Assistant Department Head and Professor in the Department of Computer Science at the University of Texas at Dallas. His research interests include the design of sequential and parallel algorithms, geometric and biomedical computing, and assistive technologies to improve human performance. His research has appeared in prestigious journals and conferences and is supported by the National Science Foundation. He has been a program committee member and reviewer for many conferences and journals, and received a Certificate of Recognition from the journal Computational Geometry: Theory and Applications in 2008 and a Certificate of Appreciation from the Game Engineering Conference in 2009.

Designing a Tele-Rehabilitation System in an Augmented Virtual Reality Environment
Friday, November 01, 2013
Balakrishnan Prabhakaran

Abstract: 3D Tele-Immersion (3DTI) environments provide a new medium for human interactions and collaborations. With the addition of touch or force feedback sensors to a 3DTI environment, new avenues are being explored for a Tele-rehabilitation system in an augmented virtual reality environment. In the past few years, advances have been made in various technologies such as 3D cameras, body sensor networks, and high-precision haptic devices. These sensors, along with powerful processing and communication capabilities, have led to a very immersive experience for users of such systems. In this talk, we describe such a system with Microsoft Kinect cameras and haptic devices in a Tele-rehabilitation setup. Less-than-ideal network conditions lead to a poor Quality of Experience (QoE) for users. We introduce some metrics that are used to evaluate the QoE in such systems. We also describe a layered architecture that houses our solutions, which use those metrics to address the QoE issues. These solutions come together to provide a better experience for the users.

Biography: Dr. B. Prabhakaran is a Professor in the Computer Science Department, University of Texas at Dallas. Dr. Prabhakaran received the prestigious NSF CAREER Award in FY 2003 for his proposal on animation databases. Dr. Prabhakaran was General Co-Chair of the ACM International Conference on Multimedia Retrieval 2013 (ICMR 2013), a General Co-Chair of ACM Multimedia 2011, and a Technical Program Co-Chair of IEEE WoWMoM 2012 (World of Wireless, Mobile, and Multimedia Networks). He served as TPC Co-Chair of IEEE ISM 2010 (International Symposium on Multimedia). Dr. Prabhakaran is a member of the Executive Council of the ACM Special Interest Group on Multimedia (SIGMM) and is Co-Chair of the IEEE Technical Committee on Multimedia Computing (TCMC) Special Interest Group on Video Analytics (SIGVA). Dr. Prabhakaran is the Editor-in-Chief of the ACM SIGMM web magazine. He is a member of the editorial boards of the Multimedia Systems journal (Springer) and the Multimedia Tools and Applications journal (Springer). He has served as guest editor (special issue on Multimedia Authoring and Presentation) for the ACM Multimedia Systems journal. Prof. Prabhakaran's research has been funded by federal agencies such as the National Science Foundation (NSF) and the US Army Research Office (ARO). He is currently the Principal Investigator of a $2.4 million NSF research grant that involves 8 researchers from different disciplines; this project explores multi-modality in 3D Tele-Immersion. He has also received generous research funding from industries, research laboratories and consortiums such as the QuEST Forum, Texas Instruments, Alcatel-Lucent, the Texas Emerging Technology Fund, and the Texas Medical (TexMed) Consortium. In all, Prof. Prabhakaran has contributed to nearly $10 million in research funding in the last several years. Dr. Prabhakaran is an ACM Distinguished Scientist.

Magnetic capsule robots for gastrointestinal endoscopy and abdominal surgery
Friday, October 25, 2013
Pietro Valdastri

Abstract: The talk will move from capsule robots for gastrointestinal endoscopy toward a new generation of surgical robots and devices, with a relevant reduction in invasiveness as the main driver for innovation. Wireless capsule endoscopy has already been extremely helpful for the diagnosis of diseases in the small intestine. Specific wireless capsule endoscopes have been proposed for colon inspection, but have never reached the diagnostic accuracy of standard colonoscopy. In the first part of the talk, we will discuss enabling technologies that have the potential to transform colonoscopy into a painless procedure. These technologies include magnetic manipulation of capsule endoscopes, real-time pose tracking, and intermagnetic force measurement. The second part of the talk will give an overview of the development of novel robotic solutions for single-incision robotic surgery. In particular, a novel surgical robotic platform based on local magnetic actuation will be presented as a possible approach to further minimize access trauma. The final part of the talk will introduce the novel concept of intraoperative wireless tissue palpation, presenting a capsule that can be directly manipulated by the surgeon to create a stiffness distribution map in real time. This stiffness map can then be used to guide tissue resection with the goal of minimizing the healthy tissue removed with the tumor.

Biography: Pietro Valdastri received the Master's degree in Electronic Engineering from the University of Pisa, Italy, in 2002, and the Ph.D. in Biomedical Engineering from Scuola Superiore Sant'Anna, Pisa, in 2006. After spending three years as Assistant Professor of Biomedical Robotics at Scuola Superiore Sant'Anna, in August 2011 he moved his research to Vanderbilt University, where he is now Assistant Professor of Mechanical Engineering, with a secondary appointment in the Division of Gastroenterology, and Director of the STORM Lab (https://my.vanderbilt.edu/stormlab). His research is focused on the design and creation of mechatronic and self-contained devices to be used inside specific districts of the human body to detect and cure diseases in a non-invasive way. He has extensively used magnetic fields to manipulate and control wireless and soft-tethered meso-scale robots inside body cavities, such as the gastrointestinal tract and the abdomen. His research has been published in more than 55 peer-reviewed journal papers, and has recently received the "Best Technology Award" at the 19th Int. Congress of the Europ. Assoc. of Endosc. Surg., the "Best Oral Presentation Award" at the 2011 Hamlyn Symp. on Med. Rob., the "3-in-5 Competition Award" at the 2012 ASME Design of Med. Devic. Conf., and the "OLYMPUS ISCAS Best Paper Award" at the 16th Annual Conf. of the Int. Soc. for Computer Aided Surgery.

Quantifying and Enhancing Surgical Performance
Friday, October 11, 2013
Gregory Hager

Abstract: With the rapidly growing popularity of the Intuitive Surgical da Vinci system, robotic minimally invasive surgery (RMIS) has crossed the threshold from the laboratory to the real world. However, I believe this system is just the beginning of a larger paradigm shift toward more quantitative and computationally-enhanced interventional medicine.

In this talk, I will first provide a brief overview of a variety of projects that we are pursuing, all of which share the common theme of enhancing the information, visualization, and physical performance of the surgeon. One area of work is the use of computer vision methods to perform direct video-to-CT registration for improved visualization and surgical awareness. A related area is the automated construction of 3D models of anatomy from traditional endoscopic imagery.

In the second part of the talk, I will highlight our work aimed at developing statistical methods for modeling RMIS. Using techniques borrowed from speech and language, we consider surgery to be composed of a set of identifiable tasks which themselves are composed of a small set of reusable motion units that we call "surgemes." By creating models of this "Language of Surgery," we are able to evaluate the style and efficiency of surgical motion. These models also lead naturally to methods for effective training of RMIS using automatically learned models of expertise, and toward methods for supporting or even automating component actions in surgery.
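The statistical flavor of this "language of surgery" can be suggested with a toy bigram model over surgeme sequences. Everything here is a hedged illustration: the surgeme vocabulary, the training "trials", and the add-one smoothing are invented stand-ins for the speech-and-language techniques the abstract alludes to.

```python
# Toy "language of surgery" sketch: score surgeme sequences with bigram models
# trained per skill level. Surgeme names and sequences are invented examples.
from collections import Counter
from math import log

def bigrams(seq):
    return list(zip(seq, seq[1:]))

def train(sequences):
    """Bigram counts plus predecessor counts for add-one smoothed probabilities."""
    bi, uni = Counter(), Counter()
    for s in sequences:
        bi.update(bigrams(s))
        uni.update(s[:-1])
    return bi, uni

def log_likelihood(seq, model, vocab_size):
    bi, uni = model
    return sum(log((bi[(a, b)] + 1) / (uni[a] + vocab_size))
               for a, b in bigrams(seq))

# Hypothetical surgeme vocabulary and training trials
VOCAB = ["reach", "grasp", "pull", "release", "idle"]
expert = [["reach", "grasp", "pull", "release"]] * 3
novice = [["reach", "idle", "grasp", "idle", "pull", "idle", "release"]] * 3
models = {"expert": train(expert), "novice": train(novice)}

def classify(seq):
    """Pick the skill-level model under which the sequence is most likely."""
    return max(models, key=lambda k: log_likelihood(seq, models[k], len(VOCAB)))

print(classify(["reach", "grasp", "pull", "release"]))
print(classify(["reach", "idle", "grasp", "pull", "idle", "release"]))
```

Just as in speech recognition, richer models of this kind (e.g., HMMs over raw kinematic signals rather than symbol bigrams) let the same framework segment motion into surgemes as well as score expertise.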

Biography: Gregory D. Hager is a Professor and Chair of Computer Science at Johns Hopkins University and the Deputy Director of the NSF Engineering Research Center for Computer Integrated Surgical Systems and Technology. His current research interests include time-series analysis of image data, image-guided robotics, medical applications of image analysis and robotics, and human-computer interaction. He is the author of more than 280 peer-reviewed research articles and books in the areas of robotics and computer vision. In 2006, he was elected a Fellow of the IEEE for his contributions in vision-based robotics. He is also on the governing board of the International Foundation of Robotics Research and is the Chair-Elect of the Computing Community Consortium.

Professor Hager received the BA degree, summa cum laude, in computer science and mathematics from Luther College in 1983, and the MS and PhD degrees in computer science from the University of Pennsylvania in 1985 and 1988, respectively. From 1988 to 1990, he was a Fulbright junior research fellow at the University of Karlsruhe and the Fraunhofer Institute IITB in Karlsruhe, Germany. From 1991 until 1999, he was with the Computer Science Department at Yale University. In 1999, he joined the Computer Science Department at JHU.

A Guide to Reducing Energy Consumption in Data Centers
Friday, September 27, 2013
Pradeep Shenoy

Abstract: Computer servers have reached a point where the cost of electricity over the life of a server is greater than the cost of the server itself. This motivates a reexamination of the system paradigm used in data centers. This talk presents an overview of the power delivery system in today’s data centers and highlights many of the common sources of excess energy consumption. Several emerging approaches to powering servers are compared. Some approaches enable extremely high energy efficiency but may require rethinking software design and management. Future computing systems will demand an informed, interdisciplinary approach to tackle the energy challenge.

Biography: Pradeep Shenoy joined Kilby Labs at Texas Instruments in May 2012. His focus area is energy conversion and system design. He previously worked in Caterpillar's Electric Power Division and in Texas Instruments' Systems and Applications Lab. He participated in the National Science Foundation's East Asia and Pacific Summer Institutes program, conducting research at Tsinghua University, Beijing, China. He received the B.S. degree in electrical engineering from the Illinois Institute of Technology and the M.S. and Ph.D. degrees in electrical engineering from the University of Illinois, Urbana-Champaign. He received the Illinois International Graduate Achievement Award in 2010.

Anonymity and Privacy on the Internet
Friday, September 27, 2013
Eric Chan-Tin

Abstract: In this talk, circuit clogging attacks on Tor will be revisited and shown to still work on the current Tor network. Tor is a popular anonymizing network used by over 500,000 people daily. Circuit clogging allows the proxies used in a Tor connection to be identified, which can be used to leak information about the client. The second part of the talk will focus on identifying the web browser used by looking only at network traffic (encrypted or plain-text).

Biography: Dr. Eric Chan-Tin received his Ph.D. in 2011 from the University of Minnesota. Dr. Chan-Tin is currently an assistant professor in the Computer Science department at Oklahoma State University. He has done work in anonymity and privacy, botnets, and network and distributed system security. His current research areas are computer and network security, cloud computing, mobile security, anonymity, and privacy.

Decision-Making in Large-Scale Dynamical Networks: Modeling, Evaluation, Estimation and Control
Friday, September 20, 2013
Yan Wan

Abstract: Decision-support technologies are badly needed in many large-scale network applications (e.g., air traffic management, virus spread control, sensor networking, and information system management). Real time decision-making in these applications is very challenging, due to large network size, complex network structure, and intricate intertwined dynamics. Furthermore, uncertain environmental impact significantly complicates decision-making procedures in realistic settings. My research is concerned with developing general tools for crucial decision-making tasks in large-scale dynamical networks under uncertainty. In particular, I have been focused on four deeply-coupled directions: 1) modeling and abstraction of uncertain environmental impact and large-scale network dynamics to permit tractable evaluation and control, 2) estimation of network state and structure based upon smartly collected measurement data, 3) effective system performance evaluation under uncertainty, and 4) design and decentralized control of large-scale network dynamics. In pursuing these directions, I utilize the tools from a variety of fields, including decentralized control, algebraic graph theory, stochastic systems analysis, optimization, numerical simulation, and information theory. In this talk, I will first motivate the decision-making problems common to several applications, and then provide an overview of my development in the above four directions, with the aim of enabling effective real-time decision-making in large-scale networks.

Biography: Dr. Yan Wan is currently an Assistant Professor in the Department of Electrical Engineering at the University of North Texas. She received her Ph.D. degree in Electrical Engineering from Washington State University in May 2009. After that, she worked briefly as a postdoctoral scholar in the Control Systems program at the University of California, Santa Barbara. Dr. Wan's research interest lies in developing tools for decision-making tasks in large-scale networks, with applications to air traffic management, sensor networking, airborne networks, systems biology, complex information systems, etc. She has authored and co-authored more than 90 publications. Her research has been funded by NSF, FAA, the MITRE Corporation, NIST, and the IEEE Control Systems Society. She was the recipient of the prestigious William E. Jackson Award in 2009, presented by the Radio Technical Commission for Aeronautics (RTCA). More information about her research can be found at www.ee.unt.edu/public/wan.

Graph Algorithms on the Cray XMT-2
Wednesday, September 11, 2013
Dr. Shahid Bokhari

Abstract: The Cray XMT-2 (Extreme Multithreading) supercomputer is the latest incarnation of the Tera architecture (1998) and traces its lineage back to the HEP (1982). The machine has hardware support for 128 threads per processor, a flat shared memory without locality, and individually lockable 64-bit words. Currently, the largest machine available has 128 processors and 4 Terabytes of shared memory. It is thus very well suited to the implementation of graph algorithms that require "unstructured" access to a large memory space. I will describe my experiences with implementing algorithms on this architecture. Examples include DNA sequencing (using de Bruijn graphs), influenza virus evolution (shortest trees), and image segmentation (maxflow-mincut). In each case I will show that there are no issues of problem partitioning or load balancing, and that good performance can be obtained using ordinary C code with the addition of a few pragmas and machine intrinsics. The end result is that the user sees a familiar C/C++ programming environment into which the implementation details of parallelism intrude only very occasionally, if at all. The XMT thus permits the programmer to be highly productive without having to be concerned with arcane details of the parallel architecture. The ease of programming of the XMT has lessons for the current crop of commodity multicore/multi-GPU systems.

Biography: Dr. Shahid Bokhari received the BSc degree in Electrical Engineering from the University of Engineering & Technology (UET), Lahore, Pakistan, in 1974, and the MS and PhD degrees in Electrical and Computer Engineering from the University of Massachusetts, Amherst, in 1976 and 1978. Dr. Bokhari was with the Department of Electrical Engineering, UET, from 1980 to 2006. He joined as an assistant professor and was promoted to associate professor in 1982 and professor in 1988. During this period he was active in introducing courses in computer engineering at the undergraduate and graduate levels, serving as director of postgraduate studies, and setting up several large Linux laboratories. He has been associated with the Institute for Computer Applications in Science & Engineering (ICASE) at NASA Langley Research Center in Hampton, Virginia, where he spent a total of nearly seven years as a visiting scientist or consultant over the period 1978-1998. He was a Visiting Scholar (2004-2008) and, later, Research Professor (2009-2012) at the Department of Biomedical Informatics, Ohio State University, Columbus, Ohio. He is now an independent researcher and consultant located in Silicon Valley. Other institutions that he has been associated with as a visitor include the Universities of Colorado, Stuttgart, and Vienna, and the Electrotechnical Laboratory in Tsukuba, Japan.

Dr. Shahid Bokhari is a Fellow of the IEEE (1997), a Fellow of the ACM (2000), and a member of the IEEE Computer Society. He was placed on the ISI Highly Cited Researchers list in Computer Science in 2003, making him one of the top 250 researchers in the field. Dr. Bokhari's research interests include parallel and distributed computing. His work on partitioning, assignment, and mapping problems is well known and has led to numerous practical applications and theoretical extensions. He is presently working on massively parallel algorithms for problems in Computational Biology and Bioinformatics. He is particularly interested in parallel algorithms for DNA alignment and assembly. He has developed a new graph-theoretic model for analyzing the evolution of segmented viruses such as influenza A. He recently implemented a parallel maximum flow algorithm for image segmentation on the Cray XMT-2 (Extreme Multithreading) Supercomputer.

Leveraging Social Networks for Improved Anonymity and P2P Systems
Friday, September 06, 2013
Matthew Wright

Abstract: Social networks are great for connecting with other people, but they can also be leveraged for enhanced security properties. In this talk, I will describe two systems, Pisces and Persea, that we have designed to take advantage of the information that is inherent in the social network structure. Pisces is a system for enhancing anonymity in peer-to-peer (P2P) anonymity system designs. An anonymity system, such as the popular Tor network, helps protect your privacy on the Internet and enables people in countries like Syria to get around Internet censorship. In Pisces, we route our anonymity paths through users' social connections using verifiable random paths. We show that this technique provides much better privacy than prior designs in the face of strong attackers.
Persea addresses the reliability of looking up information and resources in a P2P system, such as Skype or Bittorrent. Existing systems are vulnerable to an attacker adding many malicious peer nodes, e.g. by using a botnet, and having them undermine the reliability of lookups. We propose a P2P system, Persea, based on a bootstrap tree -- essentially a social network that shows how each person entered the P2P system via a series of invitations. We embed the bootstrap tree into the identities that nodes use to locate themselves and perform lookups. We argue that this approach is more suitable to P2P systems than prior approaches and show that it provides lookup success rates at least as good as in prior work.

Biography: Matthew Wright is an associate professor at the University of Texas at Arlington. He graduated with his Ph.D. from the Department of Computer Science at the University of Massachusetts in May 2005, where he earned his M.S. in 2002. His dissertation work addresses the robustness of anonymous communications. His other interests include secure and Sybil-resistant P2P systems, security and privacy in mobile and ubiquitous systems, and understanding the human element of security and privacy. Previously, he earned his B.S. degree in Computer Science at Harvey Mudd College. He is a recipient of the NSF CAREER Award and the Outstanding Paper Award at the 2002 Symposium on Network and Distributed System Security.

State of Practice of Integration, Test and Evaluation in the Department of Defense and Industry
Thursday, June 13, 2013
Tom Wissink

Abstract: This presentation will provide information on the current state of integration, test and evaluation (IT&E) practices in the aerospace industry and within the Department of Defense. This will include discussion of integration, testing, and evaluation at many different levels within the solutions the DoD procures, as well as high-level discussions of cyber security, test automation, IT&E practices, model-based systems development, and scientific testing and analysis techniques.

Biography: Tom Wissink has worked for Lockheed Martin (LM) developing, testing, and managing software-intensive systems for over 35 years. He is a Lockheed Martin Senior Fellow and, in January 2010, became the LM Corporate Director of Integration, Test & Evaluation. He has worked on programs such as the Space Shuttle, the Air Force's Satellite Command and Control Centers, the Global Positioning System, and the Hubble Telescope Project. Tom is a member of the National Defense Industrial Association (NDIA) and is the Industry chair for the Industrial Committee on Test & Evaluation (ICOTE). He has been a presenter at the Aerospace Testing Seminar and the NDIA Systems Engineering and Test & Evaluation Conferences, and a Keynote Speaker at STAREAST and STARWEST. He has a Bachelor's degree in Computer Science from Florida Atlantic University.

High-throughput Bioimage Informatics for Neuroscience
Friday, May 03, 2013
Hanchuan Peng

Abstract: In recent years, high-throughput phenotype screening that involves systematic analysis of microscopic images (hence "Bioimage Informatics") and other types of data has become increasingly prevalent and promising. Here I will discuss several examples of how to develop a pipeline of tools to understand the structures of the complicated fruit fly brain, and how to scale up the high-throughput analysis for a variety of other biological applications (e.g., for mouse and dragonfly). If time permits, I will also discuss our high-performance image visualization and computing platform, Vaa3D (http://vaa3d.org), which has been used in several challenging high-throughput bioimage informatics applications, and my recent work on a fast 3D microscopic smart-imaging system for neuroscience studies.

Biography: Dr. Hanchuan Peng joined the Allen Institute for Brain Science in September 2012 to build a computational neuroanatomy and smart imaging group for mammalian brains. Before that he was the head of a research lab at Janelia Farm Research Campus, Howard Hughes Medical Institute. He has also conducted research at Lawrence Berkeley National Laboratory, UC Berkeley, on computational biology, bioinformatics, and high-performance data mining, especially gene expression analysis, and at Johns Hopkins University Medical School on human brain imaging and analysis. Dr. Peng is currently interested in brain networks and connectomes, bioimage analysis and large-scale informatics, as well as computational biology. His recent work has focused on developing novel algorithms for 3D+ image analysis and data mining, building single-neuron, whole-brain-level 3D digital atlases for model animals, and Vaa3D, a high-performance visualization-assisted analysis system for large 3D+ biological and biomedical image data sets. He founded the annual meetings on Bioimage Informatics, is on the editorial board of Bioinformatics, and serves as a Section Editor of BMC Bioinformatics.

History Repeats Itself: Sensible and NonsenSQL Aspects of the NoSQL Hoopla
Thursday, May 02, 2013
C. Mohan

Abstract: In this talk, I will describe some of the recent developments in the database management area, in particular the NoSQL phenomenon and the hoopla associated with it. The goal of the talk is not to do an exhaustive survey of NoSQL systems. The aim is to do a broad-brush analysis of what these developments mean - the good and the bad aspects! Based on my more than three decades of database systems work in the research and product arenas, I will outline many of the pitfalls to avoid, since there is currently a mad rush to develop and adopt a plethora of NoSQL systems in a segment of the IT population, including the research community. In rushing to develop these systems to overcome some of the shortcomings of relational systems, many good principles of the latter, which go beyond the relational model and the SQL language, have been left by the wayside. Now many of the features that were initially discarded as unnecessary in the NoSQL systems are being brought in, but unfortunately in ad hoc ways. Hopefully, the lessons learnt over three decades with relational and other systems will not go to waste, and we won't let history repeat itself with respect to simple-minded approaches leading to enormous pain later on for developers as well as users of NoSQL systems! This talk was delivered as an invited keynote at the 16th International Conference on Extending Database Technology in Genoa, Italy, in March 2013. The basis for it is the paper at http://bit.ly/NoSQLp.

Biography: Dr. C. Mohan has been an IBM researcher for 30 years in the information management area, impacting numerous IBM and non-IBM products, the research community, and standards, especially with his invention of the ARIES family of locking and recovery algorithms, and the Presumed Abort commit protocol. This IBM, ACM and IEEE Fellow has also served as the IBM India Chief Scientist. In addition to receiving the ACM SIGMOD Innovation Award, the VLDB 10 Year Best Paper Award and numerous IBM awards, he has been elected to the US and Indian National Academies of Engineering, and has been named an IBM Master Inventor. This distinguished alumnus of IIT Madras received his PhD at the University of Texas at Austin. He is an inventor of 38 patents. He serves on the advisory board of IEEE Spectrum and on the IBM Software Group Architecture Board's Council. More information can be found on his home page at http://bit.ly/CMohan.

HORNS - A Novel Homomorphic Encryption System
Friday, April 26, 2013
Mahadevan Gomathisankaran

Abstract: Homomorphic encryption has been studied for a long time. A homomorphic encryption system allows one to perform computations on encrypted data, thus enabling the delegation of computations to an untrusted entity without loss of privacy. The recent paradigm of cloud computing, which aggregates computing, storage, and network resources, makes such an encryption system all the more necessary to preserve privacy on the cloud. Rivest, Adleman, and Dertouzos introduced this notion, and recently Gentry proposed a fully homomorphic encryption system. While Gentry's scheme is semantically secure, it is not practical. In this talk, I will propose a practical homomorphic encryption system that overcomes the drawbacks of Gentry's scheme.
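
To make the homomorphic property concrete, here is a minimal illustrative sketch (not the HORNS scheme itself, and with toy key sizes that are insecure by design): textbook RSA is multiplicatively homomorphic, so a product computed entirely on ciphertexts decrypts to the product of the plaintexts.

```python
# Toy RSA keypair (illustration only -- real keys use ~2048-bit moduli).
p, q = 61, 53
n = p * q                # public modulus, 3233
phi = (p - 1) * (q - 1)
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

def enc(m):
    """Encrypt plaintext m < n."""
    return pow(m, e, n)

def dec(c):
    """Decrypt ciphertext c."""
    return pow(c, d, n)

# Homomorphic step: multiply ciphertexts only, never seeing the plaintexts.
m1, m2 = 7, 12
c_prod = (enc(m1) * enc(m2)) % n
assert dec(c_prod) == (m1 * m2) % n   # decrypts to the product 84
```

An untrusted server could thus compute the product for us on encrypted inputs; a *fully* homomorphic scheme like Gentry's additionally supports addition, enabling arbitrary computation.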

Biography: Mahadevan Gomathisankaran is an assistant professor in computer science and engineering at the University of North Texas. He received his Ph.D. degree in computer engineering from Iowa State University. He is the recipient of the IBM Ph.D. Fellowship award for the academic years 2004 and 2005. Mahadevan is interested in building secure computing systems. Towards that goal he has designed various cryptographic functions that achieve the required security with minimal circuit complexity, proposed new secure processor architectures that root security in the hardware, and designed a testing framework that can test the security of systems.

Usability and Functional Testing of Mobile Apps
Friday, April 12, 2013
Guanling Chen

Abstract: The usability of mobile apps is critical for their adoption, particularly because of the relatively small screen and awkward (sometimes virtual) keyboard, despite the recent advances of smartphones. Traditional laboratory-based usability testing is often tedious, expensive, and does not reflect real use cases.

In this talk, I will describe a toolkit that embeds into mobile apps the ability to automatically collect user interface (UI) events as the user interacts with the application. The events are fine-grained and useful for quantified usability analysis. We have implemented the toolkit on Android devices and evaluated it with a real deployed Android application by comparing event analysis (state-machine based) with traditional laboratory testing (expert based). The results show that the proposed toolkit is effective at capturing detailed UI events for accurate usability analysis.

In the second part of the talk, I will present another toolkit we recently developed that can conduct functional testing based on automated GUI ripping and model construction. This toolkit uses a coarse-grained GUI modeling technique to prevent potential state explosion and allows meaningful aggregation of models obtained in different contextual situations. Real-world case studies show that the proposed approach is effective and efficient for functional testing of Android apps.

Biography: Guanling Chen is an Associate Professor of Computer Science at University of Massachusetts Lowell. His research areas include mobile computing, ubiquitous & pervasive computing. He is also an Affiliate Faculty at Institute for Security, Technology, and Society (ISTS) at Dartmouth College. After receiving his B.S. in Computer Science from Nanjing University in 1997, he completed his Ph.D. in Computer Science from Dartmouth College in 2004. He was an I3P Fellow before he joined the faculty of UMass Lowell in 2005.

Securing Binary Software through Retrofitting
Friday, April 05, 2013
Kevin Hamlen

Abstract: Most large, security-sensitive software systems inevitably include at least some commodity, closed-source components or applications. Such components often defy traditional security audits; they may consist of many megabytes of binary code developed by various organizations across numerous countries and using myriad diverse languages and tools. Few code analyses are applicable to software of such complexity without additional source information that developers are unwilling or unable to disclose.

To address this longstanding challenge, Dr. Hamlen's research advances a different approach that performs automated binary transformation of untrusted software rather than mere analysis. In this talk, he will present his recent work on binary code-transformation algorithms that automatically retrofit untrusted software products with security. The transformations are carefully crafted to succeed even when code analyses cannot determine exactly how the original code works or what it does. Once transformed, the new code becomes amenable to formal, automated, security verification, thereby offering exceptional assurance to end-users despite the original code's untrustworthy provenance.

Biography: Dr. Kevin Hamlen is an Associate Professor in the Computer Science Department at The University of Texas at Dallas. His research applies and extends compiler theory, functional and logic programming, and automated program analysis technologies toward the development of scientifically rigorous software security systems. Over the past five years his work has received over $5 million in federally funded research awards, including Career awards from both the National Science Foundation and the Air Force Office of Scientific Research. His most recent research on secure binary retrofitting and reactively adaptive malware received three best paper awards in 2011-2012, and has been featured in thousands of news stories worldwide, including The Economist and New Scientist. Dr. Hamlen received his Ph.D. and M.S. degrees from Cornell University, and his B.S. from Carnegie Mellon University, where his work on proof-carrying code garnered the Allen Newell Award for Excellence in Undergraduate Research.

New Systematic Software Testing Techniques
Friday, March 29, 2013
Renee Bryce

Abstract: Software systems can be large, and exhaustive testing is usually not feasible. Products released with inadequate testing can cause bodily harm, result in large economic losses, and affect the quality of day-to-day life. The National Institute of Standards and Technology (NIST) reports that software defects cost the U.S. economy close to $60 billion a year. This estimate cannot include or measure the additional costs of catastrophic failure and loss of life from safety-critical software. Software testers often intuitively test for defects that they anticipate, while less foreseen defects are overlooked. My research applies combinatorial testing strategies that may offset some degree of human bias. In this talk, I will review combinatorial testing and test suite prioritization. I will discuss my previous work on algorithms for combinatorial testing. I will then present my more recent contributions in this area, including prioritizing test suites for GUI and web applications using combinatorial-based criteria.
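
To illustrate the combinatorial idea (a hypothetical sketch, not Dr. Bryce's algorithms, with made-up parameter names): for three boolean configuration parameters, a pairwise suite of only 4 tests exercises every value pair of every parameter pair, versus 8 exhaustive tests, and the savings grow rapidly with more parameters.

```python
from itertools import combinations, product

# Hypothetical configuration model: three boolean parameters.
params = {"browser_cache": [0, 1], "javascript": [0, 1], "cookies": [0, 1]}
names = list(params)

# A pairwise (2-way) covering array of size 4 for this model.
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def covered_pairs(tests):
    """Collect every (param pair, value pair) exercised by the tests."""
    seen = set()
    for row in tests:
        for i, j in combinations(range(len(names)), 2):
            seen.add((i, j, row[i], row[j]))
    return seen

# Every pair of parameters must see every combination of their values.
required = {(i, j, vi, vj)
            for i, j in combinations(range(len(names)), 2)
            for vi in params[names[i]]
            for vj in params[names[j]]}

assert covered_pairs(suite) == required   # all 12 value pairs covered
print(len(suite), "tests vs", len(list(product(*params.values()))), "exhaustive")
```

The full covering-array construction problem (finding a minimal such suite for many multi-valued parameters) is where the greedy and heuristic algorithms in this line of research come in.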

Biography: Renee Bryce earned her Ph.D. in Computer Science from Arizona State University in May 2006. She earned her B.S. (1999) and M.S. (2000) degrees from Rensselaer Polytechnic Institute. Her research interests include Software Engineering, with an emphasis on software testing. Renee served as a full-time lecturer of Computer Science at Arizona State University from 2002-2006 and received the department's "Instructor of the Year" award twice during this time. She is also the recipient of the Arizona State Commission on the Status of Women award for her "achievements and contributions towards advancing the status of women". Renee is currently an Associate Professor at the University of North Texas. She has served as Principal Investigator on funding from the National Science Foundation, the National Institute of Standards and Technology, the U.S. Forest Service, the Computing Research Association - Women (CRA-W), Lawrence Livermore National Lab, and the USU Center for Women and Gender.

Medical Robotics and Computer-Integrated Interventional Medicine
Friday, March 08, 2013
Russell Taylor

Abstract: This talk will discuss ongoing research at the JHU Engineering Research Center for Computer-Integrated Surgical Systems and Technology (CISST ERC) to develop CIIS systems that combine innovative algorithms, robotic devices, imaging systems, sensors, and human-machine interfaces to work cooperatively with surgeons in the planning and execution of surgery and other interventional procedures. This talk will describe past and emerging research themes and illustrate them with examples drawn from our current research activities in medical robotics and computer-integrated interventional systems.

Biography: Russell H. Taylor received his Ph.D. in Computer Science from Stanford in 1976. He joined IBM Research in 1976, where he developed the AML robot language and managed the Automation Technology Department and (later) the Computer-Assisted Surgery Group before moving in 1995 to Johns Hopkins, where he is the John C. Malone Professor of Computer Science with joint appointments in Mechanical Engineering, Radiology, and Surgery and is also Director of the Engineering Research Center for Computer-Integrated Surgical Systems and Technology (CISST ERC). He is the author of over 300 peer-reviewed publications, a Fellow of the IEEE, of the AIMBE, of the MICCAI Society, and of the Engineering School of the University of Tokyo. He is also a recipient of numerous awards, including the IEEE Robotics Pioneer Award, the MICCAI Society Enduring Impact Award, and the Maurice Müller Award for Excellence in Computer-Assisted Orthopaedic Surgery.

The MIMONet Testbed: GNU Radio vs. Matlab Cross-Validation
Friday, March 01, 2013
Vanessa Gardellin

Abstract: The increasing use of wireless technologies, and the consequent growth in wireless traffic demand, places the problem of efficient bandwidth utilization at the forefront. Advanced wireless communication techniques such as cognitive radios and multi-antenna systems are therefore expected to be increasingly used by wireless network designers to improve bandwidth utilization. The focus here is on multiple-input multiple-output (MIMO) networks and on the development of the MIMONet project, a testbed established at the Institute of Informatics and Telematics in Pisa, Italy. The MIMONet testbed is a software-defined radio platform for network-level exploitation of MIMO technology. To assess the platform, two different implementations of an OFDM transceiver are used: one based on Matlab, the other on the GNU Radio framework. The performance of the two implementations is cross-validated and compared against theoretical predictions by means of extensive measurements, using fine-grained signal-to-noise ratio and bit-error-rate estimation methodologies. Collectively, the results of the MIMONet project establish it as the first software-defined radio testbed with carefully validated performance.
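
As a small illustration of this kind of cross-validation against theory (a sketch only, not the MIMONet code): a simulated BPSK bit-error rate over an AWGN channel can be checked against the closed-form prediction BER = 0.5 * erfc(sqrt(Eb/N0)), the same style of sanity check one would apply to a transceiver implementation.

```python
import math
import random

random.seed(1)

def simulated_ber(ebn0_db, nbits=200_000):
    """Monte Carlo BER of BPSK over an AWGN channel."""
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))       # noise std dev for unit-energy bits
    errors = 0
    for _ in range(nbits):
        bit = random.choice([-1.0, 1.0])    # BPSK symbol
        rx = bit + random.gauss(0, sigma)   # additive white Gaussian noise
        errors += (rx > 0) != (bit > 0)     # hard decision at the receiver
    return errors / nbits

def theoretical_ber(ebn0_db):
    """Closed-form BPSK/AWGN bit-error rate."""
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

for snr_db in (0, 4, 8):
    sim, theo = simulated_ber(snr_db), theoretical_ber(snr_db)
    print(f"Eb/N0 = {snr_db} dB: simulated {sim:.4f}, theory {theo:.4f}")
```

When simulated and predicted curves agree within the Monte Carlo confidence interval across the SNR range, the implementation is consistent with theory; the real testbed applies the same logic to measured, rather than simulated, data.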

Biography: Dr. Vanessa Gardellin is a researcher at the Institute for Informatics and Telematics, Italian National Research Council, in Pisa, Italy. She received her Ph.D. from the Department of Information Engineering of the University of Pisa, Italy, in 2011, and her Master's degree from the same department in 2007. Vanessa was a visiting researcher at the CReWMaN Lab from January 2009 to March 2010 under the supervision of Prof. Sajal K. Das. Her research activities span several areas, including the design and performance evaluation of multiple-access protocols for wireless networks and quality of service. She is the co-author of several papers published in international conferences and journals on channel resource allocation in MIMO, mesh, and cognitive networks. Vanessa serves as a member of the technical program committee and as a reviewer for several conferences and journals.

Multi-source Data Collection and Analysis to Support Smart Health and Well-being
Friday, February 22, 2013
Vangelis Metsis

Abstract: Smart health and well-being is an area of research that seeks to transform the healthcare system from the traditional reactive and hospital-centered approach to a preventive, proactive and person-centered approach with focus on well-being rather than the disease. Towards the achievement of that goal, we face serious challenges in data collection, handling, modeling, and analysis. Traditionally, the analysis of different aspects of human well-being derives from a variety of non-interrelated methods which has made it difficult to correlate and compare the different experimental findings for an accurate assessment of the contributing factors.
This presentation describes new methods that enable more accurate and efficient multimodal data analysis of Human-Centered computing applications in order to improve decision-making in healthcare. In particular, we present a theoretical framework for multimodal and inter-related data analysis and demonstrate different applications in cases where the purpose is to (a) monitor the cognitive and physiological condition of the human subject, and (b) to improve the quality of life through the understanding of a subject's behavior.

Biography: Dr. Vangelis Metsis is a Research Assistant Professor at the Department of Computer Science and Engineering (CSE) of the University of Texas at Arlington (UTA). Currently, he is affiliated with the Heracleia Human-Centered Computing Laboratory, hosted in the CSE department at UTA. Heracleia is a research laboratory specializing in Assistive Technologies, Medical Imaging, Bioinformatics, Sensor Networks, and Robotics. Dr. Metsis earned his B.S. degree in 2005 from the Department of Informatics of the Athens University of Economics and Business in Greece, and his Ph.D. in 2011 from the Department of Computer Science and Engineering of the University of Texas at Arlington, under the supervision of professors Fillia Makedon and Heng Huang. During 2006-2007, he worked as a Research Associate for the E.C.-funded project MedIEQ at the Department of Informatics and Telecommunications of the National Center for Scientific Research (NCSR) "Demokritos", Greece.

Aviation R&D Activities at Federal Aviation Administration (FAA)
Friday, February 15, 2013
Pradip Som

Abstract: Federal Aviation Administration (FAA) designs, develops, implements, maintains, operates, and regulates the largest and the most complex Aviation System in the world. FAA not only sets the regulatory and operational standards for the U.S. National Airspace System (NAS), it sets the bar for aviation safety around the world. This talk will address the research and development activities being undertaken by FAA to reduce the collision risk in the NAS and the process of implementing proactive Safety Management System (SMS). FAA is also working on the design and implementation of the Next Generation Air Transportation System (NextGen). This talk will also discuss the paradigm shift in aviation Communication, Navigation, and Surveillance through NextGen building blocks of GPS-based Air Traffic Management instead of Radar-based surveillance, Performance Based Navigation, Trajectory Based Operation, and System Wide Information Management.

Biography: Dr. Pradip Som is the R&D Manager for the FAA Office of Safety - Air Traffic Organization, and leads research and development efforts in the areas of Aviation Safety Management Systems, safety operational improvement, new technology introduction, NextGen, data analytics, and international ATM harmonization through interaction with EUROCONTROL and ICAO (International Civil Aviation Organization). Dr. Som co-chairs Action Plan 26 (Airport Operations Harmonization Group) with EUROCONTROL and is an active contributor to several ICAO panels on aviation collision risk. Dr. Som is also a contributing member of CANSO (Civil Air Navigation Services Organization), a global ATM organization, and leads aviation safety and performance research. Dr. Som led the FAA Design Competition for Universities for several years, which encourages innovative ideas for solving aviation problems. Before joining the FAA, Dr. Som worked for American Airlines and US Airways for several years and was involved in the development of airline decision support systems, including revenue management, flight scheduling, and demand forecasting. Dr. Som holds a Bachelor's degree in Mechanical Engineering, a Master's degree in Industrial Engineering, and a Ph.D. in Operations Research.

Thermal Modeling and Design of Three-Dimensional Integrated Circuits: Challenges and Opportunities
Friday, February 08, 2013
Ankur Jain

Abstract: Three-dimensional integrated circuits (3D ICs) are an exciting new technology based on vertical stacking of multiple device planes. Analogous to building a skyscraper instead of growing in a suburban fashion, this approach offers several advantages, including reduced signal delay, reduced power, and increased design flexibility. However, 3D IC technology also presents unique heat dissipation challenges. Removing heat from multiple device planes sandwiched between other device planes is not straightforward. Challenges and opportunities also exist in the thermal-electrical co-design and co-optimization of 3D ICs. In this talk, we will summarize our research on thermal modeling and design of 3D ICs. We will present analytical heat transfer models that predict the three-dimensional, transient temperature field in a 3D IC based on its power dissipation. These models provide a tool for accurate, run-time temperature prediction and subsequent system performance optimization. We will also discuss thermal-electrical co-optimization and possible future work involving thermal modeling and electrical design.

Biography: Dr. Ankur Jain is an Assistant Professor of Mechanical Engineering at University of Texas at Arlington where he directs the Microscale Thermophysics Laboratory. His research interests include microscale thermal transport, thermal management and modeling of semiconductor devices, thermal-electrical co-optimization and nanomanufacturing. Prior to coming to UT Arlington, he worked on the research staff at Freescale Semiconductor, Molecular Imprints and Advanced Micro Devices where his research focused on three-dimensional integrated circuits (3D ICs), with specific contributions in thermal management challenges, electrical characterization and multiphysics optimization in 3D ICs. Ankur has thirteen published journal articles and over twenty peer-reviewed conference publications. He has given invited talks at a number of international conferences and workshops and serves as reviewer for several funding agencies and leading journals. His research on 3D integrated circuits is currently supported by NSF. He received his Ph.D. (2007) and M.S. (2003) from Stanford University and B. Tech. (2001) with top honors from the Indian Institute of Technology (IIT), Delhi.

Surgical Vision at the ASTRA Robotics Lab: Toward Long-term and Accurate Augmented-Reality Display for Minimally-Invasive Surgery
Friday, January 25, 2013
Gian-Luca Mariottini

Abstract: Augmented-Reality (AR) displays increase a surgeon's visual awareness of high-risk surgical targets (e.g., the location of a tumor) by accurately overlaying a pre-operative radiological 3-D model onto the intra-operative laparoscopic video. Existing AR systems lack accuracy and robustness against frequent illumination changes, camera motions, and organ occlusions, which rapidly cause the loss of image (anchor) points, and thus the loss of the AR display after a few seconds.

In this talk, I will present our recent work at the ASTRA Robotics Lab @ UTA on the design and prototype development of a new AR system, which represents the first steps toward a long-term and accurate augmented surgical display.

This work is also in collaboration with the Urology Dept. at UTSW. Our system can automatically recover the overlay by predicting the image locations of a large number of AR anchor points that were lost after a sudden image change. A weighted sliding-window least-squares approach is also used to increase the accuracy of the AR display over time. The effectiveness of the proposed strategy in recovering the augmentation has been tested on many real partial-nephrectomy laparoscopic surgical videos from a da Vinci robot.
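As a rough sketch of the weighted sliding-window least-squares idea, the snippet below fits a linear motion model to an anchor point's recent image positions, weighting recent frames more heavily, and extrapolates one frame ahead. The window size, exponential weights, and 1-D simplification are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def predict_next(positions, decay=0.7):
    """Fit x(t) = a + b*t over a sliding window of past image positions,
    weighting recent frames more, and extrapolate one frame ahead."""
    t = np.arange(len(positions), dtype=float)
    w = decay ** (len(positions) - 1 - t)        # newest frame gets weight 1
    A = np.stack([np.ones_like(t), t], axis=1)
    W = np.diag(w)
    # Weighted least squares: solve (A^T W A) beta = A^T W x
    beta = np.linalg.solve(A.T @ W @ A,
                           A.T @ W @ np.asarray(positions, float))
    return beta[0] + beta[1] * len(positions)    # extrapolate to t = n

# An anchor drifting 2 px/frame is predicted to continue the trend.
track = [10.0, 12.0, 14.0, 16.0]
print(round(predict_next(track), 1))  # → 18.0
```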

Biography: Gian Luca Mariottini (S'04-M'06) received the M.S. degree in Computer Engineering in 2002 and the Ph.D. degree in Robotics and Automation from the University of Siena, Italy, in 2006. In 2005 and 2007 he was a Visiting Scholar at the GRASP Lab (CIS Department, UPENN, USA) and he held postdoctoral positions at the University of Siena (2006-2007), Georgia Institute of Technology (2007-2008), and the University of Minnesota (2008-2010), USA. Since September 2010, he has been an Assistant Professor at the Department of Computer Science and Engineering, University of Texas at Arlington, Texas, USA, where he directs the ASTRA Robotics Lab. His research interests are in robotics and computer vision, with a particular focus on single- and multi-robot sensing, localization, and control, as well as on surgical vision and augmented-reality systems for minimally-invasive surgical scenarios.

Making Sense of the Big Data (Funding) Landscape
Monday, December 03, 2012
Dane Skow


Abstract: The National Science Foundation spends hundreds of millions of dollars each year on cyberinfrastructure, and an increasing amount of that is going to the collection, storage, transport, analysis, and curation of data. Increasingly complex and massive data from all sources are on the rise, with sensor networks and new digital instrument data from non-Physics/Chemistry fields leading the way. The growth in quantity and quality of the data is generating an explosion of new applications tying, for example, personal health records to environmental factors and epidemiology studies, human-robotic interactions to cultural studies, and learning to round-the-clock health monitoring. The need for access, transformation, and synthesis of multiple, disparate data sources by non-experts is an increasing challenge to our current models of data curation and use. Dr. Skow will discuss the current portfolio of NSF data infrastructure projects, describe their interconnections, and explain how they relate to similar efforts globally. He will also discuss the Research Data Alliance, a planned global alliance to promote scientific data exchange between institutions around the world.

Biography: Dane Skow is currently a Program Officer for Data and Cross-Directorate Activities in the National Science Foundation's Office of Cyberinfrastructure. He is also a Research Fellow at the University of Texas at Austin's Texas Advanced Computing Center (TACC). Trained as a high-energy physicist, he worked in international distributed computing, grid computing, large-scale data collection and analysis, and systems design, operation, and security for 20 years at Argonne National Laboratory and Fermi National Accelerator Laboratory in Illinois before returning to Texas this October. Dane holds Doctorate and Master's degrees in High Energy Physics from the University of Rochester, New York, and Bachelor's degrees in Physics and Math from Augustana College, Illinois.

Next-Generation Sequencing Data Analysis Pipelines for Identification of Genome-Wide Unconventional Splice Sites and Genetic Variations
Wednesday, November 28, 2012
Yongsheng Bai


Abstract: In recent years, various next-generation sequencing (NGS) technologies have been assisting researchers in the identification of novel transcripts, splice junctions, and genetic variants. New sequencing technology for expressed RNA (RNA-Seq), or whole-transcriptome shotgun sequencing, has improved expression profiling. The key advantage of RNA-Seq is that a single experiment provides a more comprehensive view of the transcriptome than microarrays, including the ability to detect splice variants, splice junctions, and completely novel transcripts. Sequencing of all the coding regions in the genome (the exome), or targeted exome capture, is considered a cost-effective alternative to complete whole-genome sequencing and is becoming an effective strategy in genetic disease research to identify genetic variants, including single nucleotide polymorphisms (SNPs), insertions and deletions (INDELs), and large structural variations (SVs). In this talk, I will present several novel NGS bioinformatics analysis algorithms/pipelines that we have developed recently.

The first algorithm we developed, termed "Read Split Walk" (RSW), identifies non-canonical splicing regions using RNA-Seq data; we applied it to ER stress-induced Ire1α heterozygote and knockout mouse embryonic fibroblast cell lines with the aim of identifying additional IRE1α targets. Proof of principle came from the fact that the 26bp non-conventional splice site in Xbp1 was detected as the top hit by our RSW pipeline in heterozygote samples from both treatment cases but never in the negative-control Ire1α knockout samples. We have compared the Xbp1 results from our approach with results using the alignment program Exonerate and the Unix "grep" command. We conclude that our RSW pipeline is practical and complete in identifying novel splice junction sites on a genome-wide level. We believe our pipeline can detect novel splice sites in RNA-Seq data generated under similar conditions for other species.
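The core split-read idea can be illustrated with a toy example: split a read into two halves, map each half exactly, and infer a splice junction from the gap between the mapped halves. The sequences and the helper below are invented for illustration and vastly simplify the actual RSW pipeline (which handles inexact alignment and genome-scale data).

```python
# Toy illustration of the split-read intuition behind RSW.

def find_splice_gap(genome, read):
    """Split the read in half, map each half exactly, and report the gap
    between the end of the left half and the start of the right half."""
    mid = len(read) // 2
    left, right = read[:mid], read[mid:]
    lpos = genome.find(left)
    rpos = genome.find(right, lpos + len(left)) if lpos != -1 else -1
    if lpos == -1 or rpos == -1:
        return None
    return rpos - (lpos + len(left))   # 0 means contiguous, >0 suggests a splice

genome = ("AAAACCCCGGGGTTTT"      # left flank
          "ACGTACGTACGTACGT"      # 16-base segment spliced out
          "TTTTGGGGCCCCAAAA")     # right flank
read = "GGGGTTTT" + "TTTTGGGG"    # read spanning the junction
print(find_splice_gap(genome, read))   # → 16
```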

Many NGS analysis tools focusing on read alignment and variant calling for exome sequencing data have been developed in recent years. However, publicly available tools for the downstream analysis of genome-wide variants are fewer and have limited functionality. To fill this gap, we developed SNPAAMapper, a novel single nucleotide polymorphism (SNP) variant analysis pipeline that can effectively classify variants by region (e.g., exon, intron, etc.), predict the amino acid change type (e.g., synonymous or non-synonymous mutation), and prioritize mutation effects (e.g., CDS versus 5'UTR). In addition, we have developed a pipeline that can accurately discover structural variations (SVs) in individual whole-genome sequences by combining the strengths of existing approaches/methods.

Biography: Dr. Yongsheng Bai received his Ph.D. in Quantitative Biology from The University of Texas at Arlington in 2007. After graduation, Dr. Bai worked as an independent senior bioinformatics research scientist at the Human Genome Sequencing Center at Baylor College of Medicine before joining the University of Michigan. Currently he is a senior research scientist in the Department of Computational Medicine and Bioinformatics, University of Michigan, and an adjunct faculty member in the Biology Department at Eastern Michigan University. Dr. Bai has published his research in many scientific journals and conferences. His current research interests lie in the development and refinement of bioinformatics algorithms/software and databases for NGS data and the bioinformatics analysis of clinical data, as well as other topics including, but not limited to, uncovering disease genes and variants using informatics approaches, computational analysis of cis-regulation and comparative motif finding, large-scale genome annotation, and comparative genomics.

Challenges in Building a Swarm of Robotic Bees
Wednesday, November 21, 2012
Karthik Dantu


Abstract: The RoboBees project is a 5-year, $10M NSF Expeditions in Computing effort to build a swarm of flapping-wing micro-aerial vehicles (MAVs). Each MAV is projected to weigh 1 g, run on about 500 mW of power, and be about 3 cm long. A swarm of RoboBees is expected to contain a few hundred individuals, similar to bee colonies in nature. There are numerous challenges in designing flapping-wing vehicles at this size, broadly divided into Brain, Body, and Colony areas. The Brain area addresses the design of custom low-power onboard computing and sensing, along with the power electronics to drive the entire system. The Body area focuses on novel actuation mechanisms, bio-mimetic wing design, and novel control mechanisms for a RoboBee. The Colony effort deals with programming and coordination of a swarm of such MAVs, targeting specific applications such as crop pollination and urban search-and-rescue. In this talk, I will describe some of the advances made along these lines, with an emphasis on coordination of a swarm of RoboBees.

Biography: Dr. Karthik Dantu is a postdoctoral fellow in the School of Engineering and Applied Sciences at Harvard University. His interests are broadly in designing large-scale systems that combine computing, communication, sensing, and actuation, such as multi-robot systems, networked embedded systems, and cyber-physical systems. As part of the RoboBees project, his work has focused on programming and coordination of swarms of MAVs. Prior to Harvard, he obtained his Ph.D. under the guidance of Prof. Gaurav Sukhatme in the Computer Science Dept. at the University of Southern California, working on various aspects of connectivity and coordination in both static and mobile sensor networks.

Cloud Computing Brought Down to Earth
Wednesday, November 14, 2012
Dave Levine


Abstract: In 1943, IBM President T.J. Watson famously predicted that "there is a world market for five computers". Today, computers are everywhere. Indeed, cloud computing is making computing a utility similar to telephone service or electricity: plug into the wall and get as much computing as you need and are willing to pay for.

NBC News reports that Netflix, whose 30 million North American streaming-video subscribers account for 34% of all North American downstream Internet traffic, and Zynga, with more than 300 million monthly active users, both use Amazon cloud computing and storage facilities. For one dollar (or less) Amazon offers 50 hours of small on-demand usage. Microsoft Azure, Rackspace, Google, and many other providers have millions of compute nodes and exabytes of storage available for lease or on-demand rent. PaaS (Platform as a Service), SaaS, and others are presented as service models of cloud computing. REST interfaces, Xen virtualization, and SQL and NoSQL databases provide easy-to-use facilities for software engineers. In addition to shared storage utilities such as Dropbox, on-demand computation allowed a mathematician two weeks ago to break Google's own internal mail key for less than $100 in cloud computation costs. Google's experience with large, warehouse-scale computing has allowed it to reach PUEs (power overhead such as cooling and voltage conversion) of 1.06 (6% overhead), down from 2.0 in decade-old computer clusters. In this talk we discuss warehouse-size computers, hardware, system software, software utilities, programming paradigms, and the economics of cloud computing.
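The PUE figures quoted above can be made concrete with a little arithmetic: PUE is total facility power divided by IT power, so overhead is everything above 1.0. The function and the 1000 kW IT load below are illustrative assumptions.

```python
# Back-of-the-envelope PUE arithmetic for the figures quoted above
# (1.06 for modern warehouse-scale facilities vs. 2.0 for older clusters).

def facility_power(it_power_kw, pue):
    """Total facility draw = IT load * PUE; overhead is everything above 1.0."""
    total = it_power_kw * pue
    overhead = total - it_power_kw
    return total, overhead

_, oh_new = facility_power(1000, 1.06)   # modern warehouse-scale data center
_, oh_old = facility_power(1000, 2.0)    # decade-old cluster
print(round(oh_new), round(oh_old))      # → 60 1000
```

For the same 1000 kW of useful IT load, the older facility burns 1000 kW on cooling and power conversion versus 60 kW for the modern one.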

Biography: See: http://cse.uta.edu/faculty/details/?id=30

Investigating the Interacting Two-Way TCP Connections over 3GPP LTE Networks
Friday, November 09, 2012
Yinsheng Xu


Abstract: This talk discusses the interactions between two-way TCP connections in 3GPP LTE networks. In an LTE network, the two-way TCP connections share buffers over a common bottleneck, i.e., the radio access links. The behavior of each TCP connection significantly influences the others in the opposite direction. Specifically, because the radio links are asymmetric, drastic interactions arise between the TCP connections, resulting in rapid draining of the downlink buffer. The periodic idleness of the downlink results in a huge waste of precious radio bandwidth and considerable performance degradation. We investigate these interactions from the viewpoint of coupled queues and present our findings. A simple model of, and solution for, the problem is also presented.

Biography: Yinsheng Xu is a Ph.D. candidate in the Department of Computer Science and Technology at Tsinghua University, China. Currently he is a visiting scholar at the Center for Research in Wireless Mobility and Networking at the University of Texas at Arlington, under the supervision of Prof. Sajal K. Das. He received his Bachelor's degree from the School of Software at Beijing Institute of Technology in 2008. His research interests include routing in wireless sensor networks, resource scheduling, and congestion control in the mobile Internet.

Secure Wireless Networks
Friday, November 02, 2012
Panagiotis Papadimitratos


Abstract: Wireless networks have grown immensely over the past decades. Mobile computing is already a commodity, and wireless networks are becoming increasingly versatile and pervasive. In fact, they are gradually transforming our business and everyday lives. For example, scores of interconnected wireless devices are creating smart environments and becoming important parts of manufacturing, power distribution, and transportation. However, these emerging wireless technologies and novel applications raise new security concerns. Open, volatile, and resource-limited wireless networks make security a hard, multifaceted problem. In this talk, we will discuss how to secure wireless networks, focusing on securing communication. In particular, we will look at how to prove wireless protocols secure, and how to achieve strong protection along with scalability and energy efficiency.

Biography: Panagiotis (Panos) Papadimitratos earned his Ph.D. degree from Cornell University, Ithaca, NY, in 2005. He then held positions at Virginia Tech, Blacksburg, VA, the Ecole Polytechnique Federale de Lausanne (EPFL), Lausanne, Switzerland, and the Politecnico of Torino, Italy. Panos is currently an Associate Professor with the School of Electrical Engineering at the Royal Institute of Technology (KTH), Stockholm, Sweden. His research is concerned with security of networked systems. He has published and delivered tutorials (including ones at the ACM MobiCom and the ACM CCS) and invited talks on related topics.
His web page is: http://www.ee.kth.se/~papadim/

Constraint Optimal Selection Techniques (COSTs) for Linear Programming
Wednesday, October 31, 2012
Bill Corley


Abstract: Optimization is increasingly used in computer science domains such as data mining, mobile/telecommunications, bioinformatics, and database systems.
Linear programming is a major branch of optimization that models decision problems with linear cost functions and constraints. Indeed, the simplex algorithm, developed in the 1940s by George Dantzig to solve linear programs, was termed "the algorithm that runs the world" in a recent article partially available at http://www.cccblog.org/2012/08/20/the-algorithm-that-runs-the-world/. The crux of that article was that the simplex method cannot solve quickly enough the increasingly large problems of today's high-speed, high-tech, ever-accelerating world, where near real-time solutions are sought.

In this seminar, a newly patented solution approach called Constraint Optimal Selection Techniques, or COSTs, is described that dramatically reduces the number of calculations needed to solve large-scale linear programs with huge numbers of variables and constraints. The essential idea is that only relatively few constraints of a linear programming problem determine the answer. Using various constraint-selection metrics, COSTs determine the constraints most likely to do so before beginning the solution. A fundamental constraint-selection rule is described, a geometric interpretation given, and computational comparisons of the associated COST with existing linear programming algorithms are provided. Further developments of the method are also discussed.
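The flavor of a constraint-selection metric can be sketched as follows: rank the constraints of max c.x subject to a_i.x <= b_i by how well each constraint's normal aligns with the objective direction, since nearly parallel constraints are the likeliest to bind at the optimum. The cosine metric and the toy data below are illustrative assumptions, not the patented COST selection rule.

```python
import numpy as np

# Hedged sketch of a constraint-selection prior in the spirit of COSTs.

def rank_constraints(A, c):
    """Return constraint indices of  max c.x s.t. A x <= b,  sorted by
    cos(angle(a_i, c)) in descending order."""
    A, c = np.asarray(A, float), np.asarray(c, float)
    scores = (A @ c) / (np.linalg.norm(A, axis=1) * np.linalg.norm(c))
    return list(np.argsort(-scores))

# Toy problem: maximize x + y. The constraint x + y <= 4 (index 0) has a
# normal parallel to c and ranks first; the non-negativity constraints
# (indices 3, 4) point away from c and rank last.
A = [[1, 1], [1, 0], [0, 1], [-1, 0], [0, -1]]
c = [1, 1]
print(rank_constraints(A, c)[0])   # → 0
```

A solver could then start from only the top-ranked constraints and add violated ones on demand, which is the calculation-saving idea described above.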

Biography: Dr. Bill Corley of the IMSE Department and the COSMOS research center has a B.S. in electrical engineering and an M.S. in information science from Georgia Tech, a Ph.D. in systems engineering from the University of Florida, and a Ph.D. in mathematics from UT Arlington. He worked in the space program during the Saturn V effort to land on the moon and has been a faculty member at UT Arlington since 1971. His research includes a patented new algorithm for solving linear programs much faster than current methods, game theory, multiple-objective decision making, optimization theory, network analysis, fuzzy logic, statistics, and abstract mathematics.

Optimal Distributed Scheduling for Streaming Traffic in Wireless Networks
Friday, October 19, 2012
Eylem Ekici


Abstract: In this talk, we discuss distributed resource-allocation schemes for streaming traffic over wireless networks. The work is in part motivated by the extreme inadequacy of existing ad hoc WLAN protocols for supporting streaming traffic. We introduce a distributed cross-layer scheduling algorithm for networks with single-hop transmissions that guarantees finite buffer sizes and meets minimum utility requirements (e.g., throughput guarantees). The proposed algorithm achieves a total utility arbitrarily close to the optimal value, with a tradeoff in the buffer sizes. The finite-buffer property is not only important from an implementation perspective but, along with the algorithm, also yields superior delay performance. The algorithm also results in upper bounds on the average delay that scale inversely with the buffer size. Unlike traditional back-pressure-based optimal algorithms, our proposed algorithm needs no centralized computation and achieves a fully local implementation without global message passing. Rigorous numerical and implementation results are presented to illustrate the close-to-optimal throughput and far better delay performance compared to other recent distributed algorithms.

Biography: Dr. Eylem Ekici received his BS and MS degrees in Computer Engineering from Bogazici University, Istanbul, Turkey, in 1997 and 1998, respectively. He received his Ph.D. degree in Electrical and Computer Engineering from the Georgia Institute of Technology, Atlanta, GA, in 2002. Currently, he is an associate professor in the Department of Electrical and Computer Engineering of The Ohio State University, Columbus, OH. He is an associate editor of IEEE/ACM Transactions on Networking, Computer Networks Journal (Elsevier), and ACM Mobile Computing and Communications Review. He also served as the general co-chair of ACM MobiCom 2012. Prof. Ekici is the recipient of the 2008 Lumley Research Award of the College of Engineering at OSU. His current research interests include wireless sensor networks, vehicular communication systems, and next-generation wireless systems, with a focus on routing and medium access control protocols, resource management, and analysis of network architectures and protocols. He is a Senior Member of IEEE and a member of ACM.

Energy Efficient Data Collection in Wireless Sensor Networks
Friday, October 12, 2012
Francesco Restuccia


Abstract: Wireless Sensor Networks (WSNs) are an effective solution for a wide range of industrial and real-life applications, including monitoring, event detection, and target tracking. Since sensor nodes are tiny, energy-constrained devices, reducing their energy consumption has become of fundamental importance in WSNs. Recent studies have demonstrated that, in order to maximize network lifetime and reduce economic costs, the use of Mobile Elements (MEs) can be an efficient solution for data collection in WSNs. However, unless the ME mobility is predictable, sensor nodes have to discover the presence of the ME in the surrounding area before they can start exchanging data with it. This talk will discuss diverse approaches to energy-efficient ME discovery in WSNs; in particular, the analysis of an easy-to-implement, hierarchical discovery protocol will be detailed. Furthermore, some insights on the IEEE 802.15.4 MAC protocol will also be discussed.
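The energy/discovery tradeoff at the heart of ME discovery can be sketched with a tiny Monte-Carlo experiment: a duty-cycled node hears the ME only if one of the ME's in-range beacon slots overlaps an awake slot. The slot model and all numbers below are illustrative assumptions, not the protocol analyzed in the talk.

```python
import random

# Toy Monte-Carlo sketch of mobile-element discovery under duty cycling.
# The node is awake `on` slots out of every `period`; the ME beacons for
# `contact` consecutive slots starting at a random phase of the cycle.

def discovery_rate(on, period, contact, trials=20000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        start = rng.randrange(period)            # random phase of the ME pass
        if any((start + s) % period < on for s in range(contact)):
            hits += 1                            # a beacon slot met an awake slot
    return hits / trials

low  = discovery_rate(on=1, period=10, contact=3)   # 10% duty cycle
high = discovery_rate(on=3, period=10, contact=3)   # 30% duty cycle
# A longer awake window raises the discovery probability, at an energy cost,
# which is exactly the tradeoff discovery protocols try to optimize.
assert high > low
```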

Biography: Francesco Restuccia is a first-year Ph.D. student under the supervision of Prof. Sajal K. Das in the Department of Computer Science and Engineering at UTA. He is a member of the Center for Research in Wireless Mobility and Networking (CReWMaN). He obtained his M.S. and B.S. degrees in Computer Engineering from the University of Pisa, Italy. Before joining UTA, he was a Research Assistant at IIT-CNR, Pisa, Italy. His main research interests lie in the areas of wireless and sensor networks, energy efficiency, and complex systems modeling.

An Investigation in Multicore Scheduling for Wireless PHY Layer
Monday, October 01, 2012
Debashis Bhattacharya


Abstract: Large multi-core systems on chips (SoCs) are a fact of life for PHY-layer baseband processing. However, programming frameworks for such multi-core systems continue to be surprisingly primitive. Despite a significant volume of past research in MPSoC programming, most existing methodologies are not deemed appropriate for baseband processing, especially for wireless infrastructure (cell phone towers and other forms of base stations). This talk focuses on a formal scheduling methodology applicable to baseband processing, and on comparing the resultant schedules with commonly used simple parallelization schemes.

Biography: Dr. Debashis Bhattacharya is Director, Platform Software, at Futurewei Technologies, Inc., the US subsidiary of Huawei Technologies, a global force in telecommunications including wireless infrastructure. Previously, he was co-founder and CTO of Zenasis Technologies and co-founder and CEO of Simbiosys Biowares. Before that, Dr. Bhattacharya was a member of technical staff at Texas Instruments and a Professor of Electrical Engineering at Yale University. He has a Ph.D. in Computer, Information and Control Engineering from the University of Michigan and a B.Tech. in Computer Science and Engineering from the Indian Institute of Technology, Kharagpur, India. He has published widely and holds 8 patents. He has served on and chaired the committees of many prestigious conferences throughout the world.

Can You Use My Unused Storage? PeerVault: A Peer-to-Peer Platform for Reliable Data Backup
Friday, September 28, 2012
Adnan Khan


Abstract: In the last decade, large-scale peer-to-peer (P2P) systems have been envisioned as a way to provide online storage services. In existing approaches, the participating peers are required to maintain strict commitments on their online duration. On the other hand, recent results show that users participating in volunteer computing collectively exhibit certain patterns in their long-term availability, a metric that denotes periodic online durations over a considerably long time interval. In this talk, a distributed P2P platform called PeerVault is introduced that leverages the long-term availability of computer users to form a reliable storage service targeted at the backup of personal data. First, the detailed architecture of the proposed backup service will be discussed. After that, a distributed monitoring scheme will be described that helps PeerVault detect peer churn, a common problem in any P2P application. Through extensive experiments on real availability traces of hundreds of thousands of hosts from the SETI@home computing project, the proposed approach is shown to be effective in terms of both availability and reliability.

Biography: Adnan Khan is a third-year Ph.D. student under the supervision of Prof. Sajal K. Das in the Department of Computer Science and Engineering at the University of Texas at Arlington. He is a member of the Center for Research in Wireless Mobility and Networking (CReWMaN). Before joining CReWMaN, he completed his B.Sc. in the Department of Computer Science and Engineering at Bangladesh University of Engineering and Technology (BUET). His current research interests include peer-to-peer storage systems, wireless sensor networks, and pervasive computing.

Toward Long-term and Accurate Augmented-Reality Display for Minimally-Invasive Surgery
Wednesday, September 26, 2012
Gian-Luca Mariottini


Abstract: Augmented-Reality (AR) displays increase the surgeon's visual awareness of high-risk surgical targets (e.g., the location of a tumor) by accurately overlaying a pre-operative radiological 3-D model onto the intra-operative laparoscopic video. Existing AR systems lack accuracy and robustness against frequent illumination changes, camera motions, and organ occlusions, which rapidly cause the loss of image (anchor) points, and thus the loss of the AR display after a few seconds. In this talk, I will present our recent work at the ASTRA Robotics Lab @ UTA on the design and prototype development of a new AR system, which represents the first steps toward a long-term and accurate augmented surgical display. This work is also in collaboration with the Urology Dept. at UTSW. Our system can automatically recover the overlay by predicting the image locations of a large number of AR anchor points that were lost after a sudden image change. A weighted sliding-window least-squares approach is also used to increase the accuracy of the AR display over time. The effectiveness of the proposed strategy in recovering the augmentation has been tested on many real partial-nephrectomy laparoscopic surgical videos from a da Vinci robot.

Biography: Gian Luca Mariottini (S'04 - M'06) received the M.S. degree in Computer Engineering in 2002 and the Ph.D. degree in Robotics and Automation from the University of Siena, Italy, in 2006. In 2005 and 2007 he was a Visiting Scholar at the GRASP Lab (CIS Department, UPENN, USA) and he held postdoctoral positions at the University of Siena (2006-2007), Georgia Institute of Technology (2007-2008), and the University of Minnesota (2008-2010), USA. Since September 2010, he has been an Assistant Professor at the Department of Computer Science and Engineering, University of Texas at Arlington, Texas, USA, where he directs the ASTRA Robotics Lab. His research interests are in robotics and computer vision, with a particular focus on single- and multi-robot sensing, localization, and control, as well as on surgical vision and augmented-reality systems for minimally-invasive surgical scenarios.

Evaluating the Effect of Noise in Complex Networks
Friday, September 21, 2012
Sanjukta Bhowmick


Abstract: Interactions among entities in large complex systems, such as those arising in biology, social science, and software engineering, can be modeled as networks, and analysis of the models provides insights into properties of the underlying application. As with any computation involving real-world systems, network analysis is influenced by experimental conditions, subjective choices, and resource limitations. In this talk, I will discuss how concepts from numerical analysis, such as conditioning and stability, can be extended to evaluate the effect of such noise in networks, and show how these measurements can help us improve the accuracy and performance of network analysis in a variety of application domains.

Biography: Dr. Sanjukta Bhowmick is an Assistant Professor in the College of Information Science and Technology at the University of Nebraska at Omaha. She received her Ph.D. from the Pennsylvania State University. Her core research area is in high performance computing with a focus on the synergy of combinatorial and numerical methods. Her current projects focus on designing parallel, efficient and robust algorithms for analyzing large-scale dynamic networks. In particular, she is interested in evaluating the effect of experimental noise in network modelling and analysis and developing algorithms to minimize the influence of this noise.

From Soil to the Clouds - Networking in the Extremes: Underground and Airborne Sensor Networks
Friday, September 14, 2012
Mehmet Can (Jon) Vuran


Abstract: Recent developments in low-power wireless communication, distributed sensing, and networking allow sensor networks to be deployed in places where no computer has gone before. Two of these extreme scenarios will be presented in this talk: underground and airborne applications. Wireless underground sensor networks are an emerging type of sensor network in which sensors are located under the ground and communicate through soil. Their applications include precision agriculture, environment monitoring, and border patrol. Due to the significant impact of soil dynamics on communication, unique challenges exist for the development of networking solutions in this medium. Recent developments in antenna design, channel modeling, underground networking, and data harvesting will be described. The Nebraska Underground Sensing and Precision Agriculture Testbed at UNL and recent experiments with center-pivot irrigation system deployments in this testbed will be discussed.
In the second part of the talk, the CraneTracker, an embedded multi-modal mobile sensing platform for real-time migratory bird monitoring, will be described. CraneTracker integrates energy harvesting, cellular and short-range communication technologies, and a multi-modal sensor suite to provide real-time location and behavioral information about Whooping Cranes, one of the most endangered species in the world. Recent developments in testing and validation will be described for the design of embedded software that will operate for years in the air. The challenges and experiences in the design, implementation, and evaluation of the CraneTracker will be discussed. The talk will conclude with a presentation of our ongoing experimental results with cranes that migrate between Wisconsin and Florida and between Wisconsin and Indiana.

Biography: Dr. Mehmet Can (Jon) Vuran received his B.Sc. degree in Electrical and Electronics Engineering from Bilkent University, Ankara, Turkey, in 2002. He received his M.S. and Ph.D. degrees in Electrical and Computer Engineering from the Broadband and Wireless Networking Laboratory, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, in 2004 and 2007, respectively, under the guidance of Prof. Ian F. Akyildiz.
Currently, he is an Assistant Professor in the Department of Computer Science and Engineering at the University of Nebraska-Lincoln and the director of the Cyber-Physical Networking Laboratory. Dr. Vuran is the recipient of the NSF CAREER award in 2010 for "Bringing Wireless Sensor Networks Underground" and a 2012 NSF Innovation Corps member. He received the Maude Hammond Fling Faculty Research Fellowship from the University of Nebraska-Lincoln in 2008 and 2010 and the 2007 ECE Graduate Research Assistant Excellence Award from the Georgia Institute of Technology. Dr. Vuran is the co-author of the Wireless Sensor Networks book, published by Wiley in 2010. He serves as an associate editor of Computer Networks Journal (Elsevier) and Journal of Sensors (Hindawi). His current research interests are in wireless sensor networks, underground communication and networking, cognitive radio networks, and cyber-physical networks.

Game Theory: Beyond the Nash Equilibrium
Wednesday, April 25, 2012
Bill Corley


Abstract: Game theory permeates economics, biology, sociology, and physics, among other fields. Indeed, the Nobel Prize has been awarded to six people for their research in game theory. The first was awarded to John Nash, et al., in 1994, after which Nash became the subject of the movie "A Beautiful Mind". Numerous computer scientists now work in this area, on computational algorithms, multi-agent systems, artificial intelligence, and applications to fields such as computer system scheduling. But the Nash Equilibrium (NE) is still the standard solution concept underlying n-person game theory. In an NE, rationality is interpreted as selfishness, but actual competitive interactions often give results differing from this assumption.

In this seminar, two new alternative approaches are introduced. After a brief introduction to game theory, the notion of a Disappointment Equilibrium (DE) is defined as a solution concept in which a mutual standoff forces cooperation, yielding better outcomes for all players in many situations. Examples include the famous Prisoner's Dilemma game and other important social interactions. The Compromise Equilibrium (CE) is next defined as an example of the new Scalar Equilibrium (SE) concept, which can be defined for any competitive situation. In particular, the CE gives cooperative results similar to the DE. However, SEs are computationally tractable, whereas both NEs and DEs become increasingly difficult to compute for n > 2. Finally, the implications of the new DE and SE concepts for game theory and other fields are discussed.
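
As a concrete baseline (an illustrative sketch, not material from the talk), the pure-strategy Nash equilibria of a small two-player game can be found by checking every strategy profile for profitable unilateral deviations. For the Prisoner's Dilemma this finds only mutual defection, the selfish outcome that concepts like the DE and CE aim to improve on. The payoff table below is the textbook game, not the speaker's example.

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """Enumerate pure-strategy Nash equilibria of a 2-player game.

    payoffs[(i, j)] = (row player's payoff, column player's payoff).
    A profile is an equilibrium if neither player gains by deviating alone.
    """
    rows = sorted({i for i, _ in payoffs})
    cols = sorted({j for _, j in payoffs})
    eq = []
    for i, j in product(rows, cols):
        r, c = payoffs[(i, j)]
        if (all(payoffs[(k, j)][0] <= r for k in rows) and
                all(payoffs[(i, k)][1] <= c for k in cols)):
            eq.append((i, j))
    return eq

# Prisoner's Dilemma: C = cooperate, D = defect (years lost as negative payoffs).
pd = {('C', 'C'): (-1, -1), ('C', 'D'): (-3, 0),
      ('D', 'C'): (0, -3), ('D', 'D'): (-2, -2)}
eqs = pure_nash_equilibria(pd)   # only mutual defection survives
```

Even though mutual cooperation is better for both players, it is not an NE: each player can profit by defecting unilaterally.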

Biography: Dr. Bill Corley of the IMSE Department and COSMOS research center has a B.S. in electrical engineering and an M.S. in information science from Georgia Tech, a Ph.D. in systems engineering from the University of Florida, and a Ph.D. in mathematics from UT Arlington. He worked in the space program during the Saturn V effort to land on the Moon and has been a faculty member at UT Arlington since 1971. His research includes a patented new algorithm for solving linear programs much faster than current methods, game theory, multiple-objective decision making, optimization theory, network analysis, fuzzy logic, statistics, and abstract mathematics.

A Secure Data Aggregation based Trust Management Approach for Dealing with Untrustworthy Motes in Sensor Network
Wednesday, April 04, 2012
Sanjay Madria

Abstract: Efficient power management is vital for increasing the life of wireless sensor networks (WSNs), mainly because radio transmission consumes roughly three times as much energy as other operations. Techniques such as data aggregation have therefore been widely used in WSNs to preserve energy. Despite its appealing and powerful features, data aggregation requires a high level of security, since tampering with aggregated data can be difficult to distinguish from small bit errors. We propose a comprehensive trust management approach to deal with potentially dishonest and faulty motes in sensor networks. Unlike other trust management approaches, we take multiple properties into account in balancing positive trust against behavioral uncertainty, so as to yield a projection of trust that represents the truster's confidence in the trustee node's capability to complete the task. We comprehensively evaluate and compare trust management schemes in the sensing environment using the TOSSIM simulator. The results show that the proposed scheme is memory efficient and provides fairly accurate results, allowing sensors to adjust themselves appropriately and carry out their missions in both normal and extreme environments.
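
As a hedged illustration of weighing positive trust against uncertainty (a standard beta-reputation estimate, not necessarily the authors' model), the belief, disbelief, and uncertainty masses below always sum to one, and uncertainty shrinks as evidence about a mote accumulates:

```python
def beta_trust(successes, failures, prior_weight=2.0):
    """Beta-reputation trust estimate with an explicit uncertainty term.

    prior_weight plays the role of the two pseudo-observations of the
    uniform prior; expected trust = (s + 1) / (s + f + 2).
    """
    n = successes + failures
    belief = successes / (n + prior_weight)
    disbelief = failures / (n + prior_weight)
    uncertainty = prior_weight / (n + prior_weight)
    expected = (successes + 1.0) / (n + 2.0)
    return {'belief': belief, 'disbelief': disbelief,
            'uncertainty': uncertainty, 'expected': expected}

# A mote with a long, consistent history carries less uncertainty than a
# newcomer, even at the same 90% success ratio.
veteran = beta_trust(successes=90, failures=10)
newcomer = beta_trust(successes=9, failures=1)
```

A scheme like the one in the talk could then combine such per-property estimates into the overall projection of trust.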

Biography: Sanjay Kumar Madria received his Ph.D. in Computer Science from the Indian Institute of Technology, Delhi, India in 1995. He is a faculty member in the Department of Computer Science at the Missouri University of Science and Technology. His research interests include mobile data management, XML and web warehousing, and sensor computing. He won two best paper awards at IEEE conferences in 2010 and 2011. He is the co-author of a book in the area of web warehousing published by Springer in Nov 2003. He has organized international conferences (MDM, SRDS and others) and workshops, and has presented tutorials in the areas of mobile data management and web data management. His research is supported by NSF, DOE, AFRL, ARL, Boeing and others. He has also been awarded a JSPS (Japan Society for the Promotion of Science) visiting scientist fellowship in 2006 and an ASEE (American Society for Engineering Education) fellowship at AFRL from 2008 to 2012. He received faculty excellence and research awards in 2007, 2009 and 2011 from his university for excellence in research, teaching and service. He is an IEEE and ACM Distinguished Speaker, and an IEEE Senior Member.

Putative Protein Function Prediction and Disease Sensitive Biomarker Identification via Machine Learning Techniques
Monday, March 19, 2012
Hua Wang

Abstract: A major challenge in the post-genomic era is to determine protein function on a proteomic scale, which is crucial to understanding the complex mechanisms of the cell. Instead of being isolated as in traditional classification problems, protein functions are highly correlated, which provides a new opportunity to improve overall prediction accuracy by exploiting the function-function correlations. To achieve this, we place protein function prediction within the framework of multi-label classification, an emerging topic in machine learning in recent years, and learn the class memberships of each protein with respect to all the biological functional categories in an integral process, such that the function-function correlations can be elegantly incorporated into a variety of learning models.

Anticipating an organism's phenotypes based on the molecules encoded by its genome is critical for the diagnosis and intervention of major human diseases. Imaging genetics is a powerful tool for integrating phenotypes and genotypes, able to characterize the neurodegenerative process in the progression of Alzheimer's disease (AD) and other neurodegenerative disorders by utilizing multi-modal brain imaging and genome-wide array data. Unlike traditional association studies that identify genetic and imaging biomarkers by performing univariate regression analysis, we propose novel structured sparse learning algorithms to associate genetic and imaging biomarkers with both disease progression status and symptoms, where we treat biomarkers, cognitive measures, and disease status as an integral learning target to explore the interrelated structures within and between genetic/imaging data and clinical data, and incorporate the biological knowledge among genetic biomarkers induced from their genetic arrangement.
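
A toy sketch of the function-function correlations a multi-label learner can exploit (illustrative only, not the talk's method): the cosine similarity between label columns of a protein-function annotation matrix shows which functions tend to co-occur, and a learner can use high-similarity pairs to reinforce each other's predictions.

```python
from math import sqrt

def label_correlations(Y):
    """Cosine similarity between label columns of a 0/1 label matrix.

    Y[i][k] == 1 iff protein i is annotated with function k. A high
    entry (k, l) means the two functions tend to co-occur.
    """
    n_labels = len(Y[0])
    cols = [[row[k] for row in Y] for k in range(n_labels)]

    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    return [[cos(cols[k], cols[l]) for l in range(n_labels)]
            for k in range(n_labels)]

# Toy annotations: functions 0 and 1 always co-occur; function 2 is largely independent.
Y = [[1, 1, 0], [1, 1, 1], [0, 0, 1], [1, 1, 0]]
C = label_correlations(Y)
```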

Biography: Hua Wang is a PhD candidate under the supervision of Dr. Heng Huang and Dr. Chris Ding in the Computer Science and Engineering Department, University of Texas at Arlington. He is a member of the Computational Science Lab (CSL) at UTA. He received a B.S. degree in Electronic Engineering from Tsinghua University in 1999 and an M.S. degree in Signal Processing from Nanyang Technological University in 2003. Before coming to UTA in 2007, he worked at Motorola as an embedded software engineer for four years. Hua's current research interests include bioinformatics, medical image analysis, computer vision, and machine learning. During his graduate study, he has published 27 academic papers in top conferences and reputable journals, including 4 ICCV papers, 3 ECCV papers, 2 CVPR papers, 1 MICCAI paper, 1 NIPS paper, 1 ACM Multimedia paper, 2 IJCAI papers, 3 AAAI papers, 1 ECML/PKDD paper, 1 ICDM paper, 1 SIGIR paper, 1 CIKM paper, and 2 RECOMB papers, as well as one journal paper in Bioinformatics, two invited journal papers in the Journal of Computational Biology, and one journal paper in Personal and Ubiquitous Computing.

Efficient Globally-Optimal Algorithms for Subspace Learning
Wednesday, December 07, 2011
Feiping Nie

Abstract: Recently, subspace learning has emerged as one of the most powerful approaches in machine learning. Subspace learning is useful for dimensionality reduction and classification, and has attracted great interest from many researchers due to the simplicity and effectiveness of these algorithms. Most subspace learning algorithms can be viewed as special cases of a general graph embedding framework associated with a trace ratio optimization problem. Directly solving the trace ratio problem is difficult, so traditional methods usually solve an approximate problem, the ratio trace problem, which has a closed-form solution. In our work, we proposed an iterative method to solve the trace ratio problem directly. We proved that the iterative method converges to the global optimum with a quadratic convergence rate, and thus is very fast. Experimental results show that, in most cases, solving the trace ratio problem yields better performance than solving the ratio trace problem. We also applied the trace ratio criterion to feature selection. Instead of a brute-force search, we proposed an efficient method to find the feature subset with the globally optimal trace ratio value.
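
For the feature-selection case mentioned above, the iterative trace-ratio idea reduces to a simple alternation when the scatter matrices are restricted to their diagonals. The sketch below (an illustration under that diagonal assumption, with made-up scores) alternates between updating the ratio lambda and re-selecting the top-k features by a_i - lambda * b_i:

```python
def trace_ratio_select(a, b, k, tol=1e-12, max_iter=100):
    """Iterative trace-ratio feature selection (scores a_i, costs b_i).

    Maximizes sum_{i in S} a_i / sum_{i in S} b_i over subsets S of size k.
    Because the subproblem argmax_S sum_{i in S} (a_i - lam * b_i) is solved
    exactly by picking the k largest scores, lam increases monotonically
    to the globally optimal trace ratio value.
    """
    idx = list(range(len(a)))
    selected = idx[:k]                       # arbitrary initial subset
    lam = sum(a[i] for i in selected) / sum(b[i] for i in selected)
    for _ in range(max_iter):
        selected = sorted(idx, key=lambda i: a[i] - lam * b[i],
                          reverse=True)[:k]
        new_lam = sum(a[i] for i in selected) / sum(b[i] for i in selected)
        if abs(new_lam - lam) < tol:
            break
        lam = new_lam
    return sorted(selected), lam

# Hypothetical per-feature scatters: features 0 and 3 have the best
# discriminative-power-to-noise ratio as a pair.
a = [4.0, 1.0, 2.0, 6.0]   # e.g. between-class scatter per feature
b = [1.0, 2.0, 4.0, 1.5]   # e.g. within-class scatter per feature
subset, ratio = trace_ratio_select(a, b, k=2)
```

This is the diagonal analogue of the general algorithm; the full subspace-learning version replaces the top-k selection with an eigendecomposition of A - lam * B.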

Biography: Feiping Nie received the Ph.D. degree from Tsinghua University, Beijing, China. He is currently a research scientist in the CSE Department, University of Texas at Arlington. His research interests include machine learning, pattern recognition, data mining, computer vision, image processing and information retrieval.
He has published more than 80 scientific papers in top-ranked journals and conferences including IEEE Trans-PAMI, IEEE Trans-IP, IEEE Trans-NN, IEEE TKDE, IEEE Trans-VCG, IEEE Trans-CSVT, IEEE Trans-MM, Bioinformatics, Machine Learning, Pattern Recognition, NIPS, ICML, IJCAI, AAAI, CVPR, ICCV, MICCAI, SIGIR, ACM MM, and ICDE.

Embedding-Based Similarity Search in Image and Video Databases
Wednesday, November 23, 2011
Vassilis Athitsos

Abstract: Similarity-based retrieval is the task of identifying database patterns that are the most similar to a query pattern. Retrieving similar patterns is a necessary component of many practical applications in computer vision and pattern recognition. At the same time, in image and video databases, there is oftentimes a need to use computationally expensive distance measures, such as dynamic time warping, the chamfer distance, or shape context matching. The computational cost of such measures can lead to retrieval times that are too slow for practical applications. This talk presents BoostMap, a nearest neighbor search method that is designed to speed up similarity search when such computationally expensive distance measures are employed. BoostMap has two key characteristics: the first is that it maps the original nearest neighbor retrieval problem into a much easier problem involving retrieval in a Euclidean/vector space. The second is that this mapping to a vector space is optimized based on distances among large numbers of training patterns. Experimental results illustrate the advantages of using BoostMap in several application scenarios, including hand pose estimation, optical character recognition, and search in time series databases.
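
The filter-and-refine pattern BoostMap accelerates can be sketched as follows. Note the embedding here is a plain reference-object embedding (distances to a few fixed patterns); BoostMap's actual embedding is learned with boosting, which this illustration omits. Dynamic time warping plays the role of the expensive distance:

```python
def dtw(x, y):
    """Dynamic time warping distance: the 'expensive' exact measure."""
    inf = float('inf')
    n, m = len(x), len(y)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def embed(x, refs):
    """Map a pattern to a vector of distances to reference objects."""
    return [dtw(x, r) for r in refs]

def nn_filter_refine(query, database, refs, filter_size=3):
    """Filter with the cheap Euclidean distance in embedding space,
    then refine the short list with the exact (expensive) DTW."""
    q = embed(query, refs)

    def euclid(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    shortlist = sorted(database,
                       key=lambda x: euclid(embed(x, refs), q))[:filter_size]
    return min(shortlist, key=lambda x: dtw(query, x))

# Toy time-series database; references and sizes are arbitrary choices.
database = [[0, 0, 0, 0], [1, 2, 3, 4], [5, 5, 5, 5], [2, 2, 2, 2]]
refs = [[0, 0, 0, 0], [5, 5, 5, 5]]
best = nn_filter_refine([1, 2, 3, 5], database, refs)
```

In a real deployment the database embeddings would be precomputed, so only `filter_size` exact distances are evaluated per query.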

Biography: Dr. Vassilis Athitsos is an Assistant Professor in the Computer Science and Engineering Department at the University of Texas at Arlington.
His main research areas are computer vision, machine learning, and data mining. At UTA, he has established the VLM research lab. A large part of his current work focuses on developing general methods for efficient and accurate similarity-based retrieval and classification, with applications in sign language recognition, content-based access to image, video and multimedia databases, and recognition of objects and shapes.

Design, Optimization, and Performance analysis of a Hierarchical Discovery Protocol for WSNs with Mobile Elements
Tuesday, November 22, 2011
Francesco Restuccia

Abstract: Wireless Sensor Networks (WSNs) are emerging as an effective solution for a wide range of applications, especially environmental monitoring. To this end, Mobile Elements (MEs) can be used to collect data sampled by sensor nodes. One of the main challenges in such networks is the energy-efficient and timely discovery of mobile nodes. In this work, we present a hierarchical discovery protocol for WSNs with MEs, the Dual Beacon Discovery (2BD) protocol, based on two different beacon messages emitted by the mobile node (i.e., Long-Range Beacons and Short-Range Beacons), together with an analytical model based on discrete-time Markov chains. We then perform a parameter optimization study to minimize the energy consumed by the static sensor, subject to specific QoS bounds. Finally, we perform a complete performance analysis of our protocol in terms of energy spent per ME contact. We show that, with respect to a standard discovery protocol, 2BD achieves a substantial energy reduction, especially when the discovery phase is long and the application imposes strict QoS bounds.

Biography: Mr. Francesco Restuccia is a research associate at the Department of Information Engineering of the University of Pisa, in collaboration with IIT-CNR, under the joint supervision of Prof. Giuseppe Anastasi and Dr. Marco Conti. He earned his B.Sc. and M.Sc. degrees, both summa cum laude, from the University of Pisa in 2009 and 2011, respectively, as a student of the "Excellence curriculum" program. His research interests lie in the design, analysis, optimization, and performance evaluation of wired and wireless networks.

Salsa-ReDS: Reputation for Enhancing the Robustness of P2P Systems
Friday, October 14, 2011
Matthew Wright

Abstract: Salsa is one of several recent designs for a structured peer-to-peer (P2P) system that uses path diversity and redundancy to ensure greater robustness against attackers in the lookup process. In this talk, we first describe the Salsa architecture and discuss the general problem of distributed directory services in open systems. We then present Salsa-ReDS (Salsa with Reputation for Directory Services), a simple but powerful way to further improve the robustness of Salsa lookups. In Salsa-ReDS, each node tracks the performance of its peers in each lookup and uses that information to gauge the relative reliability of the peers for future lookups. We show in simulation that this technique can greatly reduce the chance of an attacker manipulating the lookup results, or maintain the same robustness with lower overhead. We conclude by describing how the ReDS idea can also be applied to other systems, and some of the potential pitfalls, challenges, and opportunities for future research in this approach.
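
A minimal sketch of the ReDS idea of tracking peer performance across lookups (illustrative only; the peer names, smoothing, and scoring rule are assumptions, not the paper's exact design): each redundant lookup path returns a candidate result, and the node keeps the answer vouched for by the most reputable peers.

```python
class ReDSNode:
    """Track per-peer lookup performance and prefer reliable peers."""

    def __init__(self):
        self.hits = {}     # peer -> successful lookups observed
        self.total = {}    # peer -> total lookups observed

    def reputation(self, peer):
        # Laplace-smoothed success rate; unknown peers start at 0.5.
        return (self.hits.get(peer, 0) + 1) / (self.total.get(peer, 0) + 2)

    def record(self, peer, success):
        self.total[peer] = self.total.get(peer, 0) + 1
        if success:
            self.hits[peer] = self.hits.get(peer, 0) + 1

    def choose(self, candidates):
        """candidates: (result, responding_peer) pairs from redundant paths.
        Score each distinct result by the summed reputation of its vouchers."""
        scores = {}
        for result, peer in candidates:
            scores[result] = scores.get(result, 0.0) + self.reputation(peer)
        return max(scores, key=scores.get)

# A peer with a history of failed lookups is outvoted by reliable ones.
node = ReDSNode()
for _ in range(5):
    node.record('honest', True)
    node.record('evil', False)
winner = node.choose([('bad', 'evil'), ('good', 'honest'), ('good', 'newpeer')])
```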

Biography: Dr. Matthew Wright is an associate professor of Computer Science and Engineering at the University of Texas at Arlington. He graduated with his Ph.D. from the Department of Computer Science at the University of Massachusetts in May 2005, where he earned his M.S. in 2002. His dissertation work addressed the robustness of anonymous communications. His other interests include intrusion detection, security and privacy in mobile and ubiquitous systems, and the application of incentives and game theory to security and privacy problems. Previously, he earned his B.S. degree in Computer Science at Harvey Mudd College. He is a recipient of the NSF CAREER Award and the Outstanding Paper Award at the 2002 Symposium on Network and Distributed System Security.

"EWB-USA's Commitment to Equity, Economy, and Ecology"
Tuesday, October 04, 2011
Cathy Leslie

Abstract: EWB-USA supports community development programs worldwide through participating in the design and implementation of sustainable engineering projects. EWB-USA projects embed concepts of natural capitalism, financial analysis, life cycle cost analysis, and applicable and appropriate technologies. Our commitment to equity, economy, and ecology is evident in the long-term commitments to the communities in which we work.

Biography: Ms. Leslie is a licensed Civil Engineer in Colorado with over 20 years of experience in the design and management of civil engineering projects. In March, 2008, after ten years as a Civil Engineering Manager at Tetra Tech, Inc., she assumed the role of Executive Director of Engineers Without Borders-USA, a position she held on a volunteer basis for six years.

Ms. Leslie began her work in developing countries as a Peace Corps Volunteer. Stationed in Nepal, she developed solutions related to drinking water and sanitation projects. During the last 20 years, whether working in corporate engineering or nonprofit international development, Ms. Leslie has developed and utilized her technical interests in creating solutions for engineering projects that integrate the needs of the client along with the sustainable needs of the environment.

As Executive Director of EWB-USA, Ms. Leslie uses her organizational and project management skills to ensure that the volunteer organization can fulfill its mission and vision. Ms. Leslie was a part of the second project to be completed within EWB-USA, a water project in Mali, Africa. There she worked directly with the community and other volunteers to develop a rainwater catchment solution. This project introduced her to EWB-USA and ultimately led to her devotion to the organization. After six years as the volunteer Executive Director, Ms. Leslie joined EWB-USA as the second Executive Director since the organization's founding in 2002. Under Ms. Leslie's guidance, EWB-USA has received many honors and awards including, most recently, the 2010 Henry C. Turner Prize.

Ms. Leslie also belongs to the American Society of Civil Engineers, American Society of Mechanical Engineers, the Water Environment Federation, and is a member of the Presidential Council of Alumnae for Michigan Technological University, where she holds her degree in civil engineering. She received the William H. Wisely Civil Engineer Award in 2008 from the American Society of Civil Engineers for her contribution to the engineering profession.

Low Energy Wireless Systems
Friday, September 30, 2011
Sudipto Chakraborty

Abstract: Low energy wireless systems have become essential for sensor, medical, and consumer electronics applications in recent years. There have been numerous developments in systems, ICs, technology platforms, and battery technology to facilitate them. In this talk, several fundamental system-level considerations leading to the development of such systems will be reviewed and practical implementation trade-offs will be discussed. Finally, some case studies based on practical implementations will be presented.

Biography: Dr. Sudipto Chakraborty received his B.Tech. degree from the Indian Institute of Technology, Kharagpur in 1998 and his Ph.D. from the Georgia Institute of Technology, Atlanta in 2002. He joined Texas Instruments as a wireless systems and circuit designer in 2004, where he has worked extensively in the area of advanced wireless and high-speed systems leading to commercially available IC products. He has various publications in the area of integrated circuits and systems.

Register Optimization in IMS and Inter-domain Handover Scheme in Proxy Mobile IPv6
Friday, September 23, 2011
Qizhi Zhang

Abstract: The Third Generation Partnership Project (3GPP) and 3GPP2 have standardized the IP Multimedia Subsystem (IMS) to provide IP-based multimedia services for next-generation networks. IMS uses the Session Initiation Protocol (SIP) as its signaling protocol for session setup and session management. Once the User Equipment (UE) attaches to the access network, it must register with the IMS so that the IMS can locate the UE and facilitate the session establishment procedure. In the first part of this talk, we introduce the IMS registration procedure and then propose an optimized scheme for registration and re-registration.

Proxy Mobile IPv6 (PMIPv6) is a new Mobile IP solution. Unlike MIPv6, PMIPv6 is a network-based protocol, which means users only need to keep an IPv6 protocol stack in their mobile equipment. However, PMIPv6 is a localized mobility management protocol and only provides intra-domain roaming management within a PMIPv6 domain. In the second part of this talk, we will discuss two proposed schemes for inter-domain handover in PMIPv6.

Biography: Dr. Qizhi Zhang is an associate professor in the School of Computer Science, South China Normal University (SCNU). He received his B.S. and M.S. degrees in mathematics from Beijing Normal University (BNU) in 1999 and 2002 respectively, and received his Ph.D. degree in computer science from Beijing University of Posts and Telecommunications (BUPT) in 2005. Currently he is a visiting scholar in the CReWMaN Lab under the supervision of Prof. Sajal Das. His research interests include the IP Multimedia Subsystem, the Mobile IP protocol, wireless networks, and mobile intelligent networks. Currently he focuses on mobility management and performance optimization in these areas.

A Bayesian Network Approach to Feature Construction with regards to Classification
Wednesday, September 21, 2011
Manolis Maragoudakis

Abstract: In order to achieve better classification outcomes, many researchers focus on improving the characteristics and abilities of existing machine learning algorithms such as decision trees, ensemble classifiers, and support vector machines. However, in domains with a limited number of input features, such techniques are prone to errors. We approach this important matter from the viewpoint of a richer encoding of the training data, and more specifically from the perspective of constructing additional features, so that hidden aspects of the subject area that models the available data are revealed to a higher degree. We suggest the use of a novel feature construction algorithm, based on the ability of Bayesian networks to capture the conditional independence assumptions among features, bringing forth properties concerning their interrelations that are not apparent when the data are provided to a classifier in their initial form. Experimental results across a wide range of domains and a large number of classification algorithms demonstrate a clear improvement in classification performance from the constructed features.
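
One very simple instance of turning a probabilistic dependence between features into a new feature (a crude stand-in for the Bayesian-network construction described above; the two-feature setup and Laplace smoothing are assumed details, not the talk's algorithm):

```python
def construct_conditional_feature(X, child, parent):
    """Append the estimate P(x_child = 1 | x_parent) as a new feature.

    The constructed column exposes the dependence between two binary
    attributes that a flat classifier would otherwise have to rediscover.
    """
    counts = {}   # parent value -> [child == 1 count, total]
    for row in X:
        c = counts.setdefault(row[parent], [0, 0])
        c[0] += row[child]
        c[1] += 1

    def prob(parent_value):
        ones, total = counts.get(parent_value, [0, 0])
        return (ones + 1) / (total + 2)   # Laplace smoothing

    return [row + [prob(row[parent])] for row in X]

# Toy binary data: the child attribute (column 1) depends on the parent (column 0).
X = [[1, 1], [1, 1], [1, 0], [0, 0]]
Xp = construct_conditional_feature(X, child=1, parent=0)
```

A full Bayesian-network approach would learn the parent sets from data rather than fixing them by hand.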

Biography: Dr. Manolis Maragoudakis holds a PhD from the Department of Electrical and Computer Engineering, University of Patras and a diploma in Computer Science from the Computer Science Department, University of Crete.
His thesis was entitled "Reasoning under uncertainty in dialogue and other natural language systems using Bayesian network techniques".
He is currently a lecturer at the Department of Information and Communication Systems Engineering at the University of the Aegean, with "Data Mining" as his field of expertise. Furthermore, he is the Departmental Coordinator for the LLP/Erasmus Programme within the University of the Aegean.
Manolis Maragoudakis is a reviewer for "IEEE Transactions on Knowledge and Data Engineering", "Knowledge-Based Systems" and "International Journal of Artificial Intelligence Tools".
He has actively supported a plethora of Artificial Intelligence and Data Mining conferences.
He is a member of the "AI-Lab" Group within the Department of Information and Communication Systems Engineering. Since 2001, he has been a member of the Hellenic Artificial Intelligence Society. He is a supporter of the actions of the Institute of Marine Conservation Archipelagos.
His research interests focus on the following thematic areas:
* Data Mining
* Privacy Preserving Data Mining
* Machine Learning
* User Modeling
* Semantic Web
* Databases
* Bayesian Networks

Amazon Web Services Cloud
Friday, September 16, 2011
Rasool Fakoor

Abstract: Since early 2006, Amazon Web Services (AWS) has provided companies of all sizes with an infrastructure web services platform in the cloud. AWS provides support for requisitioning compute power, storage, and other services, giving access to a suite of elastic IT infrastructure services as business demands them. In addition, AWS provides the flexibility to choose whichever development platform or programming model makes the most sense for the problem at hand. Moreover, AWS makes it possible to take advantage of Amazon.com's global computing infrastructure, the backbone of its multi-billion-dollar retail business, whose scalable, reliable, and secure distributed computing infrastructure has been honed for over a decade.

In this talk, I will cover various concepts of cloud computing as well as introduce some AWS services such as Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), CloudWatch, Elastic Block Store, and Elastic MapReduce.

Biography: Rasool Fakoor is a Master's student in the Department of Computer Science and Engineering at the University of Texas at Arlington and a member of the Center for Research in Wireless Mobility and Networking (CReWMaN). He received his B.Sc. degree in Software Engineering from Azad University of Tehran in 2006. During summer 2011, he worked as an intern on the Amazon AWS Elastic Compute Cloud team.

An Energy Adaptation Mechanism for P2P File Sharing Applications
Friday, September 09, 2011
Mayank Raj

Abstract: Peer-to-peer (P2P) file sharing applications are a major constituent of Internet traffic, and can be quite bandwidth and energy intensive. With the increased usage of P2P applications on mobile devices, battery life has become a significant concern. In this talk, we discuss a novel mechanism of energy adaptation in P2P file sharing applications. Unlike traditional energy management schemes, the proposed mechanism aims at enabling P2P file sharing applications to take the available energy of the device into consideration. This allows us to adapt the application and protocol behavior to ensure that the end user completes a file download before exhausting the device's battery. We discuss the implementation of the proposed mechanism in the context of the BitTorrent protocol.

Biography: Mr. Mayank Raj is a PhD student in the Department of Computer Science and Engineering at the University of Texas at Arlington and a Graduate Research Assistant at the Center for Research in Wireless Mobility and Networking (CReWMaN). He received his B.E. degree in Electronics and Communication from DSCE, India in 2005 and M.Tech in Information Technology from IIIT-Bangalore, India in 2007. Prior to joining UTA, he worked as an Intern at Motorola India Research Lab. His current research interests include wireless networks, sensor networks, mobile cloud computing, and energy adaptive computing.

Alcatel-Lucent / AT&T Fall University Entrepreneurial Workshop
Friday, August 26, 2011
John Reas

Abstract: The presentation sets the stage with the growth of North Texas and the role of entrepreneurship in the community, the May university program held in Plano, and the plans for the fall workshop, with an emphasis on the exposure that the students and their IP will get from the area business community. John Reas of Alcatel-Lucent will make the presentation.

Biography: An '83 graduate of West Point, John served as an armored officer in the 2nd Armored Division in Ft. Hood, followed by positions as a manufacturing engineer with Texas Instruments and as a materials manager with Air Systems Components before joining DSC Communications in 1995. Initially supporting the development of iMTN, a SONET cross connect platform, he went on to support the development of ADSL over an ATM platform known as Litespan, which became the platform of choice as AT&T rolled out its broadband network. After DSC was acquired by Alcatel, John led the advanced procurement team and oversaw design-to-cost activities on a variety of optical network, broadband and switching platforms at Alcatel's USA design centers in Raleigh, NC, Petaluma, CA, and Plano, TX. After Alcatel merged with Lucent, John joined the IPTV deployment team and worked with AT&T in the build-out of U-Verse and IP services in AT&T's markets, as well as overseeing the software upgrades of several releases into their network. Last year, John joined the emerging technology and innovation team in Alcatel-Lucent's marketing organization and manages a variety of innovation and proof-of-concept programs in Plano. With Alcatel-Lucent's university partnership program, John is active in promoting and supporting university student entrepreneurial programs in collaboration with the AT&T Foundry in Plano.

SSL/TLS - Real-world Applications of Cryptography and PKI
Wednesday, May 04, 2011
Joshua Davies

Abstract: Executing secure exchanges of information over public networks such as the Internet is as difficult as it is important. The SSL/TLS protocols were designed to provide a safe, standard, peer-reviewed architecture for transparent negotiation of secure channels. However, although SSL/TLS, when followed and implemented correctly, provides safety against many known attacks, improper implementation can lead to exploitable security holes. This talk will discuss the state of the art in symmetric cryptography, public-key cryptography, digital signatures, and public-key infrastructures, and how SSL/TLS ties them all together to create a standardized security layer intended to operate independently of the underlying protocol. The focus will be on real-world interoperability and safety considerations.
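
In Python's standard library, the safe and unsafe paths such a talk contrasts look roughly like this (a sketch; hosts, ports, and deployment details are up to the caller). A single default context gets certificate and hostname verification right, while the common "fix" of disabling verification reopens the man-in-the-middle hole:

```python
import socket
import ssl

def open_tls(host, port=443):
    """Safe-by-default TLS client socket: certificate and hostname
    verification on, legacy protocol versions refused."""
    ctx = ssl.create_default_context()            # loads the system CA store
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)  # SNI + hostname check

def open_tls_insecure(host, port=443):
    """The classic implementation hole: disabling verification 'to make
    the error go away' silently permits man-in-the-middle attacks."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE               # DO NOT ship this
    return ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
```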

Biography: Joshua Davies is the director of architecture at 2xoffice.com, and was the principal security architect at Travelocity.com for 10 years prior. He holds a Bachelor's degree in Computer Science from Valdosta State University and a Master's degree in Computer Science from the University of Texas at Arlington, where he did his thesis work in mobile robotics localization. He is the author of the book "Implementing SSL/TLS Using Cryptography and PKI" and has co-authored a paper titled "Use of RSSI and Time-of-Flight Wireless Signal Characteristics for Location Tracking" with Drs. Farhad Kamangar, Gergely Zaruba, Manfred Huber and Vassilis Athitsos, to be presented at the upcoming PETRA 2011 conference on pervasive technologies for assistive environments.

Energy Adaptive Computing for Mobile Devices
Friday, April 22, 2011
Mayank Raj

Abstract: Energy is a valuable commodity, especially for wireless mobile devices. In recent years, there has been considerable interest in the energy-efficient design of networks, protocols, and systems. However, most current approaches aim to reduce the device's energy (battery) consumption but do not necessarily ensure the completion of user activities within specified energy constraints. For example, a user may want to download a file from the network; can he complete the download before exhausting the battery of his wireless device? Existing energy conservation approaches make only a best effort towards achieving this goal.

In this talk we will discuss the emerging field of energy adaptive computing, which takes the user's available energy into consideration. The goal is to develop a model that adapts the behavior of applications and network protocols in order to complete user activities within a specified energy budget. We will show how to apply our energy adaptive mechanism to various network scenarios, including P2P file sharing.
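
The core arithmetic of such adaptation can be sketched with a deliberately simple energy model (idle power plus a per-byte transfer cost; the numbers below are made up, and the talk's actual model may well differ): solve the energy budget for the minimum rate that still finishes the download before the battery dies.

```python
def min_sustainable_rate(remaining_bytes, energy_j, idle_power_w,
                         energy_per_byte_j):
    """Minimum download rate (bytes/s) that finishes within the budget.

    Total energy = idle_power * T + energy_per_byte * remaining_bytes,
    with T = remaining_bytes / rate. Solving energy <= budget for rate
    gives the floor below which the download cannot complete in time;
    returns None when no rate is feasible.
    """
    transfer_energy = energy_per_byte_j * remaining_bytes
    if transfer_energy >= energy_j:
        return None                    # infeasible at any rate
    # idle_power * (bytes / rate) <= budget - transfer_energy
    return idle_power_w * remaining_bytes / (energy_j - transfer_energy)

# Hypothetical numbers: 100 MB left, 500 J of battery, 1 W idle draw,
# 2 microjoules per byte transferred.
rate = min_sustainable_rate(remaining_bytes=1e8, energy_j=500.0,
                            idle_power_w=1.0, energy_per_byte_j=2e-6)
```

An energy-adaptive protocol could compare this floor against the currently achievable rate and, for example, request more upload slots from BitTorrent peers when falling behind.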

Biography: Mr. Mayank Raj is a PhD student in the Department of Computer Science and Engineering at the University of Texas at Arlington and a Graduate Research Assistant at the Center for Research in Wireless Mobility and Networking (CReWMaN). He received his B.E. degree in Electronics and Communication from DSCE, India in 2005 and M.Tech in Information Technology from IIIT-Bangalore, India in 2007. Prior to joining UTA, he worked as an Intern at Motorola India Research Lab. His current research interests include wireless networks, sensor networks, mobile cloud computing and energy adaptive computing.

Crossing the Memory Wall: The Application-Specific Approach with Prefetching
Wednesday, April 20, 2011
Xian-He Sun

Abstract: Data access is a known bottleneck of high performance computing (HPC). The prime sources of this bottleneck are the performance gap between the processor and disk storage and the large memory requirements of ever-hungry applications. Although advanced memory hierarchies and parallel file systems have been developed in recent years, they only provide high bandwidth for contiguous, well-formed data streams, performing poorly in serving small and noncontiguous data requests. Unfortunately, many HPC applications make a large number of requests for small and noncontiguous pieces of data, as do high-level I/O libraries such as HDF-5. The problematic data-access wall remains after years of study and, in fact, is becoming probably the most notorious bottleneck of HPC. We propose a new dynamic application-specific I/O architecture for HPC. Unlike traditional I/O designs where data is stored and retrieved by request, our architecture is based on a novel “Server-Push” model in which a data access server proactively pushes data from a file server to memory and makes smart decisions on data layout based on the data access patterns of the underlying application. Here dynamic means that the data layout and prefetching mechanisms can be changed dynamically between applications and even within one application. In this talk, we present the design considerations and implementation results under MPICH2 and PVFS of the dynamic application-specific approach. We also discuss possible hardware support to extend the server-push architecture to cache and memory data access.
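
A toy analogue of the push idea (illustrative only; the real architecture operates in MPICH2/PVFS, not a Python cache): a prefetcher watches recent block requests, detects a constant stride, and pushes the next blocks into memory before they are asked for, so a regular scan pays the fetch cost only during warm-up.

```python
from collections import deque

class StridePrefetcher:
    """Detect a constant stride in recent block requests and stage the
    next blocks into a cache ahead of the application's request."""

    def __init__(self, fetch, depth=2, history=4):
        self.fetch = fetch             # fetch(block_id) -> data (expensive)
        self.depth = depth             # how many blocks to push ahead
        self.recent = deque(maxlen=history)
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.hits += 1
            data = self.cache.pop(block_id)
        else:
            self.misses += 1
            data = self.fetch(block_id)
        self.recent.append(block_id)
        self._maybe_push()
        return data

    def _maybe_push(self):
        if len(self.recent) < 3:
            return
        r = list(self.recent)
        strides = {r[i + 1] - r[i] for i in range(len(r) - 1)}
        if len(strides) == 1:          # constant stride detected
            (s,) = strides
            for k in range(1, self.depth + 1):
                nxt = r[-1] + k * s
                self.cache.setdefault(nxt, self.fetch(nxt))

# Sequential scan with stride 1: only the first three reads miss.
p = StridePrefetcher(fetch=lambda b: b * 10, depth=2)
data = [p.read(i) for i in range(10)]
```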

Biography: Dr. Xian-He Sun is the chairman and a professor of the Department of Computer Science, the director of the Scalable Computing Software laboratory at the Illinois Institute of Technology (IIT) and a guest faculty in the Mathematics and Computer Science Division at the Argonne National Laboratory. Before joining IIT, he worked at DoE Ames National Laboratory, at ICASE, NASA Langley Research Center, at Louisiana State University, Baton Rouge, and was an ASEE fellow at Navy Research Laboratories. Dr. Sun's research interests include parallel and distributed processing, high-end computing, software systems, and performance evaluation. He has close to 200 publications and 4 patents in these areas. More information about Dr. Sun can be found at his web site www.cs.iit.edu/~sun/

Can Online Social Networks Facilitate Community Detection?
Friday, April 08, 2011
Na Li

Read More

Hide

Abstract: Recent years have seen many research activities on community detection, ranging from detecting a community by simply connecting a set of targeted people to seeking a more complicated community of a specific size or density. To date, most community detection techniques can be used only if the entire graph is available. This strong limitation makes community detection feasible only for the owner of an online social site, as no one else can view the full picture of a web 2.0-based online social network. In this talk, we will mainly address how to leverage the knowledge locally available on online social networks (OSNs), for example the list of friends on an individual’s web page, to detect a minimum community. We will see that even such local knowledge comes at a cost. We therefore propose a heuristic algorithm to efficiently detect the minimum community containing a group of targeted users on OSNs at minimum cost. In fact, even if the entire OSN graph is given, the minimum community detection problem is NP-hard, let alone the local-view based detection. Since our algorithm exploits topological properties of social networks, it performs well in our experiments on real-world social network data sets.
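As a rough illustration of local-view detection, the sketch below grows a BFS tree by querying one friend list at a time, as a crawler restricted to individuals' pages would, then keeps the tree paths that connect the targets. It is a hypothetical simplification, not the speaker's heuristic:

```python
from collections import deque

# Illustrative local-view heuristic (not the speaker's actual algorithm):
# expand outward from one targeted user, querying only friend lists, and
# return the union of tree paths linking all targets.
def minimum_community(friends, targets):
    root, *rest = targets
    parent = {root: None}
    queue = deque([root])
    while queue and not all(t in parent for t in rest):
        u = queue.popleft()
        for v in friends[u]:            # one "friend list" query per node
            if v not in parent:
                parent[v] = u
                queue.append(v)
    community = set()
    for t in targets:
        while t is not None:            # walk back along the BFS tree
            community.add(t)
            t = parent[t]
    return community

graph = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
print(sorted(minimum_community(graph, [1, 5])))  # → [1, 2, 4, 5]
```

Each friend-list query is the "cost" the abstract refers to; the heuristic stops querying as soon as all targets are connected.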

Biography: Ms. Na Li is a 4th year Ph.D. student in Computer Science and Engineering Department at the University of Texas at Arlington and a research assistant in the Center for Research in Wireless Mobility and Networking (CReWMaN). She received her B.S. degree in Computer Science and Technology from Nankai University, Tianjin, China, in 2005. From 2005 to 2007, She was a research assistant in Computer Network Information Center, Chinese Academy of Sciences, Beijing, China. She’s a member of IEEE and ACM. Her current research interests include privacy and security issues in challenging networks, like Wireless Sensor Networks (WSNs), Opportunistic Networks (OppNets) and Online Social Networks (OSNs), in particular preserving location privacy of data source and sink in WSNs, defending against selfish/malicious data forwarding in OppNets, and protecting relationship privacy in publishing OSN data. Additionally, she is also dedicated to mobile social computing, designing social-aware information sharing protocols.

Department Colloquium: Optimizing Performance of Cache Memory Systems in Multicore Processors
Friday, April 01, 2011
Krishna Kavi

Read More

Hide

Abstract: This talk focuses on techniques to improve cache memory performance in multicore processors. It is an understatement to say that the performance of multicore systems is limited by their memory systems' performance. Our research has developed both hardware and software solutions to improve the performance of (L1 and L2) cache memories. Software solutions include profiling of data access patterns, relocating data, and restructuring code to improve performance. Hardware solutions include customizing cache address mapping (or indexing) for different threads and/or different objects within an application, and the simultaneous existence of multiple address mappings.

We are developing a program analysis tool, Gleipnir, that supports both our hardware and software solutions. Gleipnir is built on top of Valgrind, a widely used program analysis framework. When fully developed, Gleipnir will provide very fine-grained information about each memory access, including the program variable associated with the access and the function and thread that caused it.

Localities exhibited by data depend on object types and how they are accessed in an application. Better performance can be achieved by spreading the data accessed by applications more uniformly across the cache and minimizing cache conflicts. Code and data restructuring techniques that rely on profiled data-access information can minimize conflict misses and improve the uniformity of cache accesses.

Uniformity of accesses can also be achieved using custom indexing for each application. We are also investigating the use of multiple indexing schemes (or multiple decoders) with cache memories. Performance can also be improved if cache memories are partitioned and reconfigured optimally to meet divergent needs of data types and access patterns. Combining data and code restructuring with reconfigurable caches can lead to even better performance.
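The effect of a customized index mapping can be shown in a few lines. The XOR-folded index below is a generic textbook scheme, assumed purely for illustration; it is not the mapping used in the speaker's hardware:

```python
# Illustrative cache-indexing sketch (not the speakers' design): a strided
# access pattern maps every address to the same set under plain modulo
# indexing, while a simple XOR-folded index spreads it across all sets.
SETS = 16  # a 16-set cache, block size 1 for simplicity

def modulo_index(addr):
    return addr % SETS

def xor_index(addr):
    return (addr ^ (addr >> 4)) % SETS   # fold higher address bits into the index

addrs = [i * SETS for i in range(64)]    # stride equal to the set count
modulo_sets = {modulo_index(a) for a in addrs}   # every access conflicts
xor_sets = {xor_index(a) for a in addrs}         # accesses spread uniformly
```

Under modulo indexing all 64 accesses collide in set 0; the XOR-folded index touches all 16 sets, which is the uniformity the abstract argues for.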

Biography: Dr. Krishna Kavi is currently a Professor of Computer Science and Engineering and the Director of the NSF Industry/University Cooperative Research Center for Net-Centric Software and Systems at the University of North Texas. During 2001-2009, he served as the Chair of the department. He also held an Endowed Chair Professorship in Computer Engineering at the University of Alabama in Huntsville, and served on the faculty of the University of Texas at Arlington. He was a Scientific Program Manager at the US National Science Foundation during 1993-1995. He has served on several editorial boards and program committees.

His research is primarily on Computer Systems Architecture including multi-threaded and multi-core processors, cache memories and hardware assisted memory managers. He also conducted research in the area of formal methods, parallel processing, and real-time systems. He published more than 150 technical papers in these areas. He received more than US $4.5 M in research grants. He graduated 12 PhDs and more than 35 MS students. He received his PhD from Southern Methodist University in Dallas Texas and a BS in EE from the Indian Institute of Science in Bangalore, India.

Streaming Data Dissemination in Multi-hop Cluster-based Wireless Sensor Networks with Mobile Sinks
Friday, March 25, 2011
Long Cheng

Read More

Hide

Abstract: It has been shown that sink mobility provides an energy-efficient approach to data dissemination in wireless sensor networks (WSNs). Most approaches targeted at WSNs with mobile sinks (MSs) have addressed the problem of data dissemination where only a few messages are reported over a long timeframe. However, dissemination of streaming data is becoming relevant in WSNs, as more and more multimedia sensor nodes – equipped with image, audio, and video capabilities – are being used to characterize the sensing environment. In this scenario, a sequence of messages propagates into the network, and the problem of finding an effective routing path for disseminating data to MSs becomes even more challenging, since the communication overhead for reaching the MS might also be significant.

In this talk, we present an energy-efficient streaming data dissemination (SDD) protocol for cluster-based WSNs with MSs. For energy-efficient on-demand route discovery, we design a heuristic broadcasting-over-cluster-heads protocol for multihop cluster-based WSNs, where a direct link between cluster heads (CHs) is not necessarily available. By introducing a cross-cluster handover mechanism and a path redirection scheme, SDD maintains end-to-end connectivity between the source and the MS, while avoiding constant transmission of the MS location as it moves across multiple clusters. We evaluate the performance of the proposed SDD protocol and compare it with a hierarchical cluster-based data dissemination protocol. Simulation results demonstrate its effectiveness in terms of both end-to-end delivery delay and energy efficiency.

Biography: Mr. Long Cheng has been a PhD student in the State Key Lab of Networking and Switching Technology, Beijing University of Posts and Telecommunications, since August 2007. He received his B.S. degree in Computer Science from Xi’an Telecommunication Institute, China, in 2004, and his M.S. degree in Telecommunication Engineering from XiDian University, China, in 2007. From June 2009 to December 2009, he was a research assistant in the Department of Computing, Hong Kong Polytechnic University. Since January 2010, he has been a visiting PhD scholar in the CReWMaN Lab under the supervision of Prof. Sajal Das. His main research interests cover wireless sensor networks, the Internet of Things, mobile computing, and pervasive computing.

The Space Shuttle - Early Dreams
Friday, March 25, 2011
Hans Mark

Read More

Hide

Abstract: The most complex machine ever built, the space shuttle has more than 2.5 million parts, including almost 370 kilometers (230 miles) of wire, more than 1,060 plumbing valves and connections, over 1,440 circuit breakers, and more than 27,000 insulating tiles and thermal blankets. While serving as Undersecretary of the U.S. Air Force, Secretary of the U.S. Air Force, and Deputy Administrator of NASA, Dr. Mark had the opportunity to make many important decisions that shaped the development of the space shuttle and the US space flight program at the most crucial time in its history. Since his departure from NASA in 1984, Dr. Mark has continued to influence the path that the US space program has taken. In this talk he will discuss the critical decisions that led to the development of the Space Shuttle, many of the tough and joyous times in its history, and the future of the US Space Program after the Space Shuttle retires later this year.

Biography: Dr. Mark specializes in the study of spacecraft and aircraft design, electromagnetic rail guns, and national defense policy. He has served on the faculty of the Cockrell School of Engineering since 1988. He served as chancellor of The University of Texas System from 1984 to 1992. He previously taught at Boston University, Massachusetts Institute of Technology, University of California at Berkeley, and Stanford University. Dr. Mark has served as director of the NASA-Ames Research Center, Secretary of the Air Force, deputy administrator of NASA and most recently, the Director of Defense Research and Engineering. He has published more than 180 technical reports and authored or edited eight books. Dr. Mark is a member of the National Academy of Engineering and an Honorary Fellow of the American Institute of Aeronautics and Astronautics. He is the recipient of the 1999 Joe J. King Engineering Achievement Award and the 1999 George E. Haddaway Medal for Achievement in Aviation. He holds six honorary doctorates.

Comparative Analysis of Biological Networks Using Markov Chains and Hidden Markov Models
Thursday, March 24, 2011
Byung-Jun Yoon

Read More

Hide

Abstract: Recent advances in high-throughput experimental techniques for measuring molecular interactions have enabled the systematic study of biological interactions on a global scale. Comparative analysis of genome-scale interaction networks can lead to important insights into the functional organization of cells and their regulatory mechanisms. In this talk, we will introduce the concept of comparative network analysis, discuss its significance in biomedical research, and review mathematical models and algorithms that can be used for comparing biological networks. In particular, we will focus on hidden Markov models (HMMs) and Markov chains, which have been widely used in various engineering fields, and show how these models can be used for efficient comparative analysis of large-scale networks.
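A minimal example of the Markov chain view: treat each network as a random walk on its adjacency matrix and compare the walks' stationary distributions as a crude similarity signal. This toy comparison is assumed for illustration only and is far simpler than the HMM-based methods the talk covers:

```python
import numpy as np

# Illustrative sketch (not the speaker's method): model a network as the
# Markov chain of a random walk on its adjacency matrix; the stationary
# distribution summarizes where the walk spends its time.
def stationary(adj):
    P = adj / adj.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    pi = np.full(len(adj), 1.0 / len(adj))     # start from the uniform distribution
    for _ in range(500):                       # power iteration
        pi = pi @ P
    return pi

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)  # a triangle network
pi = stationary(A)
# for an undirected graph the stationary mass of a node is degree / (2 * edges),
# so each node of the triangle gets 1/3
```

Two networks can then be compared by a distance between their stationary vectors, node correspondence permitting.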

Biography: Dr. Byung-Jun Yoon received the B.S.E. (summa cum laude) degree from Seoul National University (SNU), Seoul, Korea in 1998 and the M.S. and Ph.D. degrees from the California Institute of Technology (Caltech), Pasadena, CA, in 2002 and 2007, respectively, all in electrical engineering. He was a postdoctoral researcher at Caltech from Dec. 2006 to Oct. 2007. In 2008, he joined the Department of Electrical and Computer Engineering at Texas A&M University, College Station, TX, where he is currently an assistant professor. His main research interests are in genomic signal processing, bioinformatics, and computational biology.

State and Parameter Estimation Problems in Complex Dynamical Networks
Friday, March 18, 2011
Sandip Roy

Read More

Hide

Abstract: State estimation and parameter identification for linear dynamical systems have been exhaustively studied in the controls engineering and signal processing literature over the last sixty years. Meanwhile, research on inference (estimation) in graphical models is flourishing in the computer science community. These two thrusts in estimation theory are complementary: the engineering approaches are often naturally suited to temporal dynamics, are concerned with continuous-valued states, and exploit linearity for tractability; meanwhile, the computer science approaches often focus on logical rather than temporal dependencies, capture discrete-valued phenomena, and exploit graph sparsity for tractability.

In this talk, we will put forth the viewpoint that new understandings of, and methods for, estimation are needed that marry the engineering and computer science perspectives. As motivation, we will identify estimation problems that arise in the strategic management of modern dynamical networks. Specifically, we will introduce concrete examples from three dynamical networks at quite different scales: 1) a network discovery problem for zoonotic disease spread, 2) competing security and state estimation problems in information-fusion tasks, and 3) a parameterization problem for a spatiotemporal weather-impact model with air transportation applications. We will argue that these estimation problems require fundamentally new, meshed techniques that exploit the deep connection between a network's graph topology and its dynamics. After presenting a few preliminary results in this direction, we will consider a unified formulation of the dynamical-network estimation problem and postulate a promising approach to it.

Biography: Dr. Sandip Roy received a B.S. degree in Electrical Engineering from the University of Illinois at Urbana-Champaign in 1998, and M.S. and Ph.D. degrees in Electrical Engineering from the Massachusetts Institute of Technology in 2000 and 2003, respectively. Since 2003, he has been at Washington State University (WSU), where he is currently an Associate Professor. Dr. Roy has also held various outside summer appointments, including at the University of Wisconsin and NASA's Ames Research Center. His research is focused on the control and design of complex dynamical networks, with application to air traffic control, sensor networking, and systems biology problems.

Duty Cycling in Wireless Sensor Networks: Design and Analysis of an Energy-efficient Randomized Scheme
Friday, March 11, 2011
Giacomo Ghidini

Read More

Hide

Abstract: In wireless sensor networks (WSNs), duty cycling of sensor nodes between dormant and active states is adopted to extend the network lifetime and/or allow for battery recharging in case of energy-harvesting devices, while maintaining connectivity and coverage. Both deterministic and randomized duty cycling schemes have been proposed in the literature. Randomized schemes are an attractive solution due to their limited communication overhead and simplicity, while their performance in terms of connection delay and duration can be evaluated using probability theory.

In this talk, we first make the case for an additional performance metric: the energy efficiency of a duty cycling scheme. We show that the operations to be performed when sensors switch between dormant and active states (e.g., opening/closing radio connections, warming up MEMS sensors) consume energy (and time), and that state-of-the-art randomized schemes do not address this issue. Then, we introduce a novel energy-efficient randomized scheme. For the proposed Markov chain-based randomized scheme, we present a mathematical analysis as well as experimental results on Sun SPOT sensor devices, confirming that energy efficiency can be significantly improved without affecting connection delay and duration.
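The flavor of such a scheme can be sketched with a two-state (dormant/active) Markov chain. The parameters below are hypothetical and the sketch is not the speaker's actual protocol: each slot, a dormant node wakes with probability p and an active node sleeps with probability q, giving a long-run duty cycle of p / (p + q), while the number of state switches drives the fixed wake-up/shut-down overhead the talk highlights.

```python
import random

# Illustrative two-state Markov duty-cycling sketch (hypothetical parameters,
# not the speaker's scheme).
def simulate(p, q, slots, seed=0):
    rng = random.Random(seed)
    active, active_slots, switches = False, 0, 0
    for _ in range(slots):
        flip = rng.random() < (q if active else p)
        if flip:
            active = not active
            switches += 1            # each switch incurs fixed wake-up/shut-down cost
        active_slots += active
    return active_slots / slots, switches

duty, switches = simulate(p=0.1, q=0.3, slots=100_000)
# duty converges to p / (p + q) = 0.25
```

An energy-efficient scheme would aim to keep the same duty cycle while lowering the switch count, since each transition burns energy without contributing sensing time.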

Biography: Giacomo Ghidini is a 3rd-year Ph.D. student under the supervision of Dr. Sajal K. Das in the Department of Computer Science and Engineering at the University of Texas at Arlington. He is a member of the Center for Research in Wireless Mobility and Networking (CReWMaN) at UT Arlington. In 2010 he held a research assistant position at Oracle Sun Labs in Menlo Park, CA. As a member of the Internet of Things Actualized (IoTA) research group, he developed an analysis-oriented visualization system for Sensor.Network, a data storage and exchange platform for the Internet of Things, under the supervision of Dr. Vipul Gupta.

Giacomo received his B. Comp. Eng. and M. Comp. Eng. degrees from the University of Bologna, Italy, in 2004 and 2008, respectively. He worked on his master's thesis during a 6-month visit at CReWMaN on a scholarship from the College of Engineering of the University of Bologna. In 2006 he was an exchange student at the University of Technology, Sydney, Australia, on a scholarship from the University of Bologna. During his undergraduate studies, he interned at Siemens AG in Munich, Germany, in 2000 and 2001, where he was a member of the Information and Communication Networks Division under the supervision of Dr. Peter Hannss.

His current research interests include energy-efficient duty cycling in wireless sensor networks and the integration of wireless sensor networks in the Internet of Things.

PexForFun - Programming Exercises and Automatic Grading in the Cloud
Monday, March 07, 2011
Nikolai Tillmann

Read More

Hide

Abstract: PexForFun can be used to learn software programming at many levels, from high school all the way through graduate courses. With PexForFun, students edit code in any browser – with IntelliSense support – and it is executed and analyzed in the cloud. PexForFun supports C#, Visual Basic and F#. PexForFun finds interesting and unexpected input values that help students understand what their code is actually doing. Under the hood, PexForFun uses dynamic symbolic execution to thoroughly explore feasible execution paths. The real fun starts with Coding Duels, where the student has to write code that implements a specification. PexForFun finds any discrepancies between the student's code and the specification. PexForFun connects teachers, curriculum authors and students in a unique social experience, tracking and streaming progress updates in real time.
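The Coding Duel idea can be shown in miniature: a grader searches for an input on which a student's attempt disagrees with the hidden specification. Pex finds such inputs with dynamic symbolic execution; the sketch below substitutes a plain input sweep, and both functions are invented examples:

```python
# A Coding Duel in miniature (illustrative only; Pex uses dynamic symbolic
# execution rather than this brute-force sweep).
def spec(n):                 # hidden reference implementation: 0 + 1 + ... + n
    return sum(range(n + 1))

def student(n):              # buggy attempt: off by one, sums only up to n - 1
    return sum(range(n))

def find_discrepancy(spec, attempt, candidates):
    for x in candidates:
        if spec(x) != attempt(x):
            return x, spec(x), attempt(x)   # counterexample input and both outputs
    return None

print(find_discrepancy(spec, student, range(100)))  # → (1, 1, 0)
```

The returned counterexample (here, input 1) is exactly the kind of "interesting and unexpected input value" the abstract describes showing to the student.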

Biography: Nikolai Tillmann is a Principal Research Software Design Engineer at Microsoft Research, where he works on combining dynamic and static program analysis techniques. He currently leads the Pex project, a framework for runtime verification and automatic test-case generation for .NET applications based on parameterized unit testing and dynamic symbolic execution. In another application, http://pexforfun.com, the same underlying technology forms the basis of a serious game that helps students improve their programming skills. He also works on the Spur project, a tracing Just-In-Time compiler for .NET and JavaScript code. Previously he worked on AsmL, an executable modeling language, and the Spec Explorer 2004 model-based testing tool. He co-developed XRT, a concrete/symbolic state exploration engine and software model-checker for .NET.

Cognition and Cooperation in Wireless Networks
Friday, March 04, 2011
Dinesh Rajan

Read More

Hide

Abstract: Next generation wireless systems need to support a multitude of services with a wide range of data rates and reliability requirements. The limited battery resource at a mobile terminal, coupled with the hostile multipath fading channel, makes the problem of providing reliable high data rate services challenging. Two recent approaches, cognitive radio (cognition) and cooperative communications (cooperation), have shown the potential to significantly improve spectral efficiency. In this talk, we present an introduction to simple cognitive and cooperative strategies. We then focus on the Gaussian interference channel and evaluate the capacity gains offered by leveraging cognitive knowledge. We also present novel coding and transmission schemes that show superior performance over existing coding methods. Potential generalizations of this work and some open challenges in this area will be discussed.

Biography: Dinesh Rajan received the B.Tech. degree in Electrical Engineering from Indian Institute of Technology (IIT), Madras in 1997. He received his M.S. and Ph.D. degrees in Electrical and Computer Engineering in 1999 and 2002, respectively, from Rice University, Houston, Texas. He joined the Electrical Engineering Department at Southern Methodist University, Dallas, Texas in August 2002, where he is currently an associate professor. His current research interests include communications theory, wireless networks, information theory and computational imaging.

Non-Cryptographic Authentication/Identification in Wireless Networks
Thursday, March 03, 2011
Kai Zeng

Read More

Hide

Abstract: Due to the open nature of the wireless medium, wireless networks are vulnerable to various identity-based attacks. Although traditional cryptographic techniques can potentially prevent identity-based attacks, they are either unavailable or insufficient in certain scenarios. For example, bootstrapping a secure association between communicating parties requires user identification, but this cannot be solved solely by cryptographic mechanisms because of the lack of a pre-shared secret. In emerging wireless networks, such as cognitive radio networks, primary users must be identified at the signal level without relying on higher-layer cryptographic means. Furthermore, cryptography is usually considered expensive for resource-constrained devices, such as sensors and RFIDs. In light of these circumstances, there is increasing interest in enhancing or supplementing traditional authentication protocols in wireless networks with various lower/physical layer fingerprint/signature schemes.

This talk provides an overview of various non-cryptographic mechanisms for user authentication and device identification in wireless networks. Our recent work on bootstrapping secure associations between nearby devices and detecting identity-based attacks in cognitive radio and WiFi networks will be discussed.

Biography: Dr. Kai Zeng is currently a postdoctoral researcher in the Department of Computer Science at University of California, Davis. He received his Ph.D. degree in Electrical and Computer Engineering at Worcester Polytechnic Institute in 2008. He obtained his B.E degree in Communication Engineering and M.E degree in Communication and Information Systems both from Huazhong University of Science and Technology, China, in 2001 and 2004, respectively. His research interests are in network security and wireless networking.

Structured Sparsity and Its Applications on Medical Imaging and Machine Vision
Wednesday, March 02, 2011
Junzhou Huang

Read More

Hide

Abstract: Today, sparsity techniques are widely used to address practical problems in the fields of medical imaging, machine learning, computer vision, data mining and image/video analysis. This talk will briefly introduce the relevant sparsity techniques and their successful applications in compressive sensing, sparse learning, computer vision and medical imaging. We will then build a new framework called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing. The new sparsity techniques under this framework have been successfully applied to different applications, such as compressive MR image reconstruction, video background subtraction, object tracking in visual surveillance, tag separation in tMRIs, computer-aided diagnosis and so on. The improved experimental results in these applications demonstrate the effectiveness of our new framework on large-scale data.
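The distinction between standard and structured sparsity can be shown with their shrinkage operators: elementwise soft-thresholding keeps individual large coefficients, while group soft-thresholding keeps or discards whole predefined blocks. This is a generic textbook illustration, not the speaker's code:

```python
import numpy as np

# Standard sparsity: shrink each coefficient independently.
def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Structured (group) sparsity: shrink each predefined block as a unit, so a
# small coefficient survives if its group is strong, and dies with a weak group.
def group_soft_threshold(x, groups, lam):
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * x[g]
    return out

x = np.array([3.0, 0.1, 0.1, 0.2])
groups = [[0, 1], [2, 3]]
shrunk = soft_threshold(x, 1.0)                  # kills all small entries
grouped = group_soft_threshold(x, groups, 1.0)   # keeps 0.1 in the strong group
```

Here the elementwise operator zeroes every small coefficient, while the group operator preserves the small coefficient in the first block (it shares a group with a strong one) and removes the second block entirely.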

Biography: Junzhou Huang is a graduating PhD student in the Department of Computer Science at Rutgers, The State University of New Jersey. His research focuses on medical imaging, machine learning and computer vision. He has published over 30 peer-reviewed articles in premier conferences and journals. He won the MICCAI Young Scientist Award 2010 in the medical imaging community, and was selected as one of 10 emerging leaders in multimedia and signal processing by the IBM T.J. Watson Research Center in 2010.

Analyzing the Effect of Buffer Size on Performance Evaluation of Wireless LANs
Friday, February 25, 2011
Sk Kajal Arefin Imon and Sajib Datta

Read More

Hide

Abstract: The use of Wireless LANs is prevalent nowadays. Voice calls, video streaming, and online gaming are some of the applications we enjoy while connecting to the Internet through Wireless LANs. These applications are delay-sensitive, and their performance depends on many aspects of the network conditions.

In this talk, we will analyze some trade-offs affecting the performance of wireless applications for end users. We will present a mathematical model for Wireless LANs that incorporates the underlying principles of such networks. We will show how our model can be used for capacity estimation (i.e., how many users can be supported simultaneously without sacrificing the quality of communication). The model can be extended to scenarios where multiple traffic classes are present.
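A back-of-the-envelope version of such a capacity estimate might look as follows; the contention-penalty form and all numbers are invented for illustration and are not the speakers' model:

```python
# Toy WLAN capacity estimate (hypothetical model, not the speakers'):
# assume per-user MAC efficiency decays as contention grows, and count how
# many users still meet a target per-user rate.
def capacity(raw_rate_mbps, target_mbps, overhead=0.1):
    n = 1
    while True:
        # share the channel among n users, with a contention penalty (1 + overhead * n)
        per_user = raw_rate_mbps / (n * (1 + overhead * n))
        if per_user < target_mbps:
            return n - 1        # last n that still met the target
        n += 1

print(capacity(raw_rate_mbps=54.0, target_mbps=2.0))  # → 12
```

The real model in the talk would replace the ad hoc penalty term with the actual MAC contention behavior, but the capacity question ("largest n meeting a quality target") has this same shape.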

Biography: Sk Kajal Arefin Imon received his B.Sc. degree in Computer Science and Engineering from Bangladesh University of Engineering and Technology (BUET). He is now a Ph.D. student in Computer Science and Engineering at the University of Texas at Arlington and is affiliated with the Center for Research in Wireless Mobility and Networking (CReWMaN). His research interests include wireless networks, network modeling and analysis, cloud computing, and energy efficient routing.


Sajib Datta is a PhD student in Computer Science and Engineering at UTA and a researcher in CReWMaN. He received his B.S. degree in Computer Science from North South University, Bangladesh. His research interests include broadband wireless networks, network modeling and analysis, VoIP, QoS/QoE, and security in cloud computing.

Cross-Layer Customization Framework for Energy-Efficient and Real-Time Embedded Systems and Multi-Core Platforms
Friday, February 18, 2011
Peter Petrov

Read More

Hide

Abstract: High integration densities coupled with an abundance of wireless connectivity have resulted in many modern devices implemented as complex computing systems. These applications usually feature a large number of capabilities, such as aggregated multimedia functions (speech, audio, video), communication protocols, security mechanisms, user interfaces, and many others. The majority of these applications, however, are extremely energy constrained, require real-time guarantees, or increasingly often both. These constraints create significant challenges for traditional embedded system design approaches, which are based on general-purpose processor cores and system software infrastructure and thus suffer from power inefficiency and poor real-time guarantees. Traditionally, embedded systems have borrowed many general-purpose mechanisms at all layers of system design, including the application (with the associated compiler technology), the operating system, and the hardware architecture. The strict layer separation has provided portability, maintainability, and flexibility in executing a diverse range of programs. Even though many of these characteristics are important in the embedded domain, the concomitant disadvantages of energy inefficiency, poor real-time guarantees, and suboptimal performance create severe limits for many embedded applications.

In this talk I will describe a cross-layer customization methodology for both uni- and multi-processor embedded platforms which, while preserving the benefits of layer isolation and generality, achieves energy efficiency and improved real-time guarantees and performance. By introducing an appropriate amount of configurability at the OS and hardware levels, relevant application knowledge is propagated and utilized at run-time across the system layers. The introduced techniques comprehensively cover the fundamental aspects of memory management, task execution control, and inter-processor data communication, sharing, and coherence.

Biography: Dr. Peter Petrov obtained his BS and MS degrees in Computer Science from Sofia University, Bulgaria, and his PhD degree in Computer Engineering from the University of California, San Diego. His research is in the areas of low-power and real-time embedded systems, and more specifically in application-specific embedded processors and multi-core platforms. Dr. Petrov is one of the founders of the Workshop on Application Specific Processors and has served as a General and Program Chair of this annual event. He is a founding member and a steering committee member of the IEEE Symposium on Application Specific Processors (SASP), for which he has also served as a general and a program chair. Dr. Petrov served as a guest associate editor of the IEEE Transactions on Very Large Scale Integration Systems for the Special Section on Application Specific Processors in 2008. He also serves as a technical committee member of several prominent IEEE/ACM conferences in the area of embedded systems and hardware/software co-design.

Integrative Analysis of Multi-Dimensional Cancer Genomic Data
Wednesday, February 09, 2011
Shihua Zhang

Read More

Hide

Abstract: Cells are complex systems with multiple layers of organization that interact and influence each other. Precise coordination among epigenetic status, transcription, translation, transport, and metabolic reactions is essential to maintaining the function and robustness of cellular systems. Emerging multi-dimensional genomics data present unprecedented opportunities to study this cross-layer coordination, but also pose new challenges in data analysis. To our knowledge, no method has previously been proposed to jointly analyze genomic datasets with more than two dimensions. Here, we propose a joint matrix factorization framework, as well as a sparse network-regularized version, to address these challenges. We applied these methods to the genomics datasets of 385 ovarian cancer samples from the TCGA project. We performed extensive validation tests based on known functional gene sets (GO, KEGG and GeneRIF pathways), overlap analysis across data levels, network and pathway analysis, and clinical association analysis. Our study provides a powerful analytical framework for uncovering the biological patterns and implications across multi-dimensional `omic' data.
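The shared-basis idea behind joint matrix factorization can be sketched with two toy data matrices factored against a common W (X_i ≈ W H_i) using standard multiplicative updates. This sketch omits the sparsity and network regularization of the actual method, and the toy data are invented:

```python
import numpy as np

# Joint NMF sketch: two data matrices over the same samples (rows) share one
# basis W, X_i ≈ W @ H_i.  Plain multiplicative updates; illustrative only.
def joint_nmf(Xs, k, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    n = Xs[0].shape[0]
    W = rng.random((n, k))
    Hs = [rng.random((k, X.shape[1])) for X in Xs]
    eps = 1e-9
    for _ in range(iters):
        for i, X in enumerate(Xs):
            # update each dataset's own loading matrix H_i
            Hs[i] *= (W.T @ X) / (W.T @ W @ Hs[i] + eps)
        # update the shared basis W against all datasets at once
        num = sum(X @ H.T for X, H in zip(Xs, Hs))
        den = sum(W @ H @ H.T for H in Hs)
        W *= num / (den + eps)
    return W, Hs

# two "omic layers" with the same two sample clusters
X1 = np.array([[1.0, 0], [1, 0], [0, 1], [0, 1]])
X2 = np.array([[2.0, 0], [2, 0], [0, 3], [0, 3]])
W, (H1, H2) = joint_nmf([X1, X2], k=2)
err = np.linalg.norm(X1 - W @ H1) + np.linalg.norm(X2 - W @ H2)
```

Because both layers are explained by one W, the rows of W recover sample patterns that are consistent across the layers, which is the "cross-layer coordination" the abstract targets.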

Biography: Dr. Shihua Zhang is an assistant professor in the Academy of Mathematics and Systems Science at the Chinese Academy of Sciences. His principal research interests lie in the development of computational models and algorithms in bioinformatics and systems biology. Dr. Zhang received a Ph.D. in Operational Research from the Academy of Mathematics and Systems Science, CAS, and is currently a postdoctoral researcher at the University of Southern California.

Department Colloquium: Adaptation of Pervasive Computing Applications
Monday, February 07, 2011
Christian Becker

Read More

Hide

Abstract: Pervasive Computing is characterized by utilizing computing resources near the user, often embedded in everyday artifacts. In order not to burden the user with configuration and adaptation effort, systems need to find new configurations whenever changes in resources or user requirements demand it. In this talk I will cover aspects of adaptation in Pervasive Computing: contractual components that allow the adaptation of single applications, and current work addressing groups of applications that share the same contextual environment.

Biography: Christian Becker has been a professor of Information Systems at the University of Mannheim since 2006. Prior to this he was a visiting professor for distributed systems at the University of Duisburg-Essen in the Spring Term of 2006. He studied Computer Science at the Universities of Karlsruhe and Kaiserslautern, where he received the Diploma in 1996. In 2001 he received a PhD from the University of Frankfurt and joined the distributed systems group at the University of Stuttgart as a postdoc. His research focused on system support for Pervasive Computing and Context-Aware Computing. In 2004 he received the Venia Legendi (Habilitation) for Computer Science (Informatik). Christian's research interests are distributed systems and Context-Aware Computing.

Artificial Life Simulation of Humans and Lower Animals: From Biomechanics to Intelligence
Thursday, December 09, 2010
Demetri Terzopoulos

Read More

Hide

Abstract: The confluence of virtual reality and artificial life, an emerging discipline that spans the computational and biological sciences, has yielded synthetic worlds inhabited by realistic artificial flora and fauna. Artificial animals are complex synthetic organisms with functional, biomechanically-simulated bodies, sensors, and brains with locomotion, perception, behavior, learning, and cognition centers. These biomimetic autonomous agents in their realistic virtual worlds foster deeper computationally-oriented insights into natural living systems. Virtual humans and lower animals are of great interest in computer graphics because they are self-animating graphical characters poised to dramatically advance the motion picture and interactive game industries. Furthermore, they engender new applications in computer vision, sensor networks, medical image analysis, and other domains.

Biography: Demetri Terzopoulos is the Chancellor's Professor of Computer Science at UCLA. He graduated from McGill University and received his PhD degree from MIT in 1984. He is a Fellow of the IEEE, a Fellow of the Royal Society of Canada, and a member of the European Academy of Sciences. His many awards include an Academy Award for Technical Achievement from the Academy of Motion Picture Arts and Sciences for his pioneering work on physics-based computer animation. Recently, he was the inaugural recipient of the IEEE Computer Vision Significant Researcher Award. He is one of the most highly-cited engineers and computer scientists, with more than 300 published research papers and several volumes, primarily in computer graphics, computer vision, medical imaging, computer-aided design, and artificial intelligence/life. Terzopoulos joined UCLA in 2005 from New York University, where he held the Lucy and Henry Moses Professorship in Science and was Professor of Computer Science and Mathematics at NYU's Courant Institute. Previously he was Professor of Computer Science and Professor of Electrical and Computer Engineering at the University of Toronto, where he retains status-only faculty appointments. www.cs.ucla.edu/~dt

Opportunistic Routing Algorithms for Delay Tolerant Networks
Friday, December 03, 2010
Eyuphan Bulut

Read More

Hide

Abstract: Delay Tolerant Networks (DTNs), also called intermittently connected mobile networks, are wireless networks in which a fully connected path from source to destination is unlikely to exist. Message delivery in these networks therefore relies on opportunistic routing, in which nodes use the store-carry-and-forward paradigm to route messages. However, effective forwarding based on only limited knowledge of the contact behavior of nodes is challenging. In this talk, we will discuss the routing problem in DTNs from several aspects and present four novel algorithms for different DTN environments: (i) a multi-period, multi-copy Spray and Wait routing algorithm in which copies are distributed to nodes in different periods; (ii) a multi-period erasure-coding-based routing algorithm in which the optimal erasure coding parameters for different periods are selected to minimize cost; (iii) an efficient single-copy routing algorithm in which the correlation between the mobility of nodes is utilized; and (iv) a social-structure-aware routing algorithm in which message exchanges between nodes take the grouping behavior of nodes into account. In all of these algorithms, our common objective is to increase the message delivery ratio and decrease the average delivery delay while minimizing the routing cost (the number of copies used per message, or the number of forwardings of a single message between nodes) under the given circumstances. We will also present simulation results (based on both real and synthetic DTN traces) comparing the performance of the proposed algorithms with state-of-the-art DTN routing algorithms.
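For background on algorithm (i): the binary Spray and Wait forwarding rule (due to Spyropoulos et al.) that the multi-period variant builds on can be sketched as below. This is a generic illustration, not the speaker's code; the function name and return convention are invented.

```python
def spray_and_wait_forward(copies, met_destination):
    """Sketch of binary Spray and Wait, a multi-copy DTN routing scheme.
    A node holding `copies` > 1 copy tokens hands half of them to an
    encountered relay (spray phase); once a single copy remains, it waits
    for a direct contact with the destination (wait phase).
    Returns (copies_kept, copies_given, delivered)."""
    if met_destination:
        return copies, 0, True           # direct delivery ends the routing
    if copies > 1:
        give = copies // 2               # binary split of remaining tokens
        return copies - give, give, False
    return copies, 0, False              # wait phase: no further relaying
```

The total number of copies per message is thus bounded by the initial token count, which is the routing-cost knob the talk's algorithms tune per period.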

Biography: Eyuphan Bulut received the B.S. and M.S. degrees in computer engineering from Bilkent University, Ankara, Turkey, in 2005 and 2007, respectively. He is currently a Ph.D. candidate in the Computer Science Department of Rensselaer Polytechnic Institute (RPI), Troy, NY. His interests include design of protocols for wireless ad hoc and sensor networks including routing protocols, target tracking and topology control algorithms.

Large-scale Image Classification via Geometric Coding
Monday, November 15, 2010
Kai Yu

Read More

Hide

Abstract: In this talk I will introduce our research on image classification and object recognition. We focus on applying linear SVMs on nonlinear encoding of local features to achieve both high accuracy and scalability. The most popular "bag-of-visual-words" representation can be seen as a vector quantization (VQ) coding approach. We generalize VQ to a family of unsupervised nonlinear coding methods, which explore the geometry of distributions of local patches. The methods have achieved state-of-the-art results on many challenging image classification tasks, including Caltech 101, Caltech 256, PASCAL VOC, and ImageNet.
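The VQ coding view of bag-of-visual-words can be sketched as a hard nearest-codeword assignment followed by a histogram. A minimal illustration (names invented), assuming a k-means-style codebook; the talk's methods generalize this hard assignment to nonlinear codes that respect the geometry of the descriptor distribution:

```python
import numpy as np

def vq_encode(descriptors, codebook):
    """Sketch of vector-quantization (VQ) coding for bag-of-visual-words:
    each local descriptor (N, d) is hard-assigned to its nearest codeword
    in `codebook` (K, d), and the image is represented by the normalized
    histogram of assignments."""
    # squared Euclidean distance from every descriptor to every codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(axis=1)                       # hard assignment
    hist = np.bincount(assign, minlength=len(codebook)).astype(float)
    return hist / max(hist.sum(), 1.0)               # the image code
```

The resulting codes are what a linear SVM is trained on; replacing the hard argmin with a geometry-aware soft encoding is what lifts accuracy while keeping the classifier linear and scalable.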

Biography: Dr. Kai Yu is the Head of the Media Analytics Department at NEC Laboratories America, where he leads research in image understanding, video surveillance, augmented reality, machine learning, and data mining. He serves as an Area Chair for ICML 2010/2011 and as a PC member at many leading machine learning, data mining, information retrieval, and computer vision conferences. His research team has won winner prizes and ranked among the top performers at various prestigious challenges, including PASCAL VOC 2009, ImageNet 2010, and TRECVID 2008/2009, and has developed cutting-edge commercial technologies featured by CNN and the WSJ. He received his Ph.D. in Computer Science from the University of Munich, Germany, in 2004, and worked at Siemens as a Senior Research Scientist before joining NEC in 2006.

Towards Helmets that Can Read Your Mind
Friday, November 12, 2010
Roozbeh Jafari

Read More

Hide

Abstract: In the field of Brain-Computer Interfaces (BCI), researchers have been investigating how to allow totally paralyzed or 'locked-in' persons to interact with software or to control hardware such as wheelchairs and prosthetics. A wearable electroencephalography (EEG) sensory system that can assess the effectiveness of advertisements, or perhaps track the causes of disorders including neurodegenerative disease, obesity, or drug addiction, is another example. Enabling these applications with the aid of wearable and mobile computers could revolutionize our daily lives. In this talk, we will present lightweight EEG signal processing methodologies for BCI on resource-constrained wearable platforms. The ultimate objective in the design of wearable platforms is to reduce power consumption, mainly in order to reduce the form factor and the battery size. We will illustrate techniques that identify and execute spatial, temporal, and spectral templates in an optimal order such that the computational load is minimized. We will present our results on EEG data from an inhibition task ('Go'/'NoGo') and demonstrate the effectiveness of our proposed techniques. We will further discuss a few other examples of wearable computers and their applications.
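The cost-ordered template evaluation might look roughly like the following sketch. This is a guess at the general shape of such a scheme, not the speaker's method; the (cost, gain, score) representation, the ordering criterion, and the early-exit rule are all assumptions made for illustration.

```python
def cascaded_classify(x, templates, threshold=2.0):
    """Illustrative sketch: evaluate spatial/temporal/spectral template
    matchers in increasing order of compute cost per unit of expected
    discriminative gain, stopping as soon as the accumulated evidence
    for 'Go' vs. 'NoGo' is decisive -- saving computation (and hence
    power) on a wearable device. Each template is a (cost, gain,
    score_fn) triple, where score_fn maps the signal x to log-odds."""
    spent, evidence = 0.0, 0.0
    for cost, gain, score in sorted(templates, key=lambda t: t[0] / t[1]):
        evidence += score(x)                 # accumulate log-odds evidence
        spent += cost
        if abs(evidence) >= threshold:       # early exit once confident
            break
    return ("Go" if evidence > 0 else "NoGo"), spent
```

The point of the ordering is that easy trials are decided by cheap templates alone, so the expensive matchers run only on ambiguous inputs.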

Biography: Dr. Roozbeh Jafari received his B.Sc. in Electrical Engineering from Sharif University of Technology in 2000. He received an M.S. in Electrical Engineering from SUNY at Buffalo, and an M.S. and a Ph.D. in Computer Science from UCLA, in 2002, 2004, and 2006 respectively. He spent 2006-2007 in the EECS department at UC Berkeley as a postdoctoral researcher. Dr. Jafari is currently an assistant professor in Electrical Engineering at the University of Texas at Dallas. His research is primarily in the area of networked embedded system design and reconfigurable computing, with an emphasis on medical/biological applications and their signal processing and algorithm design.

Building Watson -- A Brief Overview of DeepQA and the Jeopardy! Challenge
Thursday, November 11, 2010
Nanda Kambhatla

Read More

Hide

Abstract: A computer system that can directly and precisely answer natural language questions over an open and broad range of knowledge has been envisioned by scientists and writers since the advent of computers themselves. While current computers can store and deliver a wealth of digital content created by humans, they are unable to operate over it in human terms. The quest for building a computer system that can do open-domain Question Answering is ultimately driven by a broader vision that sees computers operating more effectively in human terms rather than strictly computer terms. They should function in ways that understand complex information requirements, as people would express them, for example, in natural language questions or interactive dialogs. Computers should deliver precise, meaningful responses, and synthesize, integrate, and rapidly reason over the breadth of human knowledge as it is most rapidly and naturally produced -- in natural language text.

The DeepQA project at IBM shapes a grand challenge in Computer Science that aims to illustrate how the wide and growing accessibility of natural language content and the integration and advancement of Natural Language Processing, Information Retrieval, Machine Learning, Knowledge Representation and Reasoning, and massively parallel computation can drive open-domain automatic Question Answering technology to a point where it clearly and consistently rivals the best human performance. A first stop along the way is the Jeopardy! Challenge, where we are planning to build an automated system that will compete with human grand champions in the game of Jeopardy!. In this talk, we will give an overview of the DeepQA project and the Jeopardy! Challenge.

Biography: Nanda Kambhatla has nearly two decades of research experience in the areas of Natural Language Processing (NLP), text mining, information extraction, dialog systems, and machine learning. He holds 7 U.S. patents and has authored over 40 publications in books, journals, and conferences in these areas. Nanda holds a B.Tech in Computer Science and Engineering from the Institute of Technology, Banaras Hindu University, India, and a Ph.D. in Computer Science and Engineering from the Oregon Graduate Institute of Science & Technology, Oregon, USA.

Currently, Nanda is the senior manager of the Human Language Technologies department at IBM Research - India, Bangalore. He leads a group of over 20 researchers focused on research in the areas of NLP, advanced text analytics (IE, IR, sentiment mining, etc.), speech analytics and statistical machine translation. Most recently, Nanda was the manager of the Statistical Text Analytics Group at IBM's T.J. Watson Research Center, the Watson co-chair of the Natural Language Processing PIC, and the task PI for the Language Exploitation Environment (LEE) subtask for the DARPA GALE project. He has been leading the development of information extraction tools/products and his team has achieved top tier results in successive Automatic Content Extraction (ACE) evaluations conducted by NIST for extracting entities, events and relations from text from multiple sources, in multiple languages and genres.

Earlier in his career, Nanda has worked on natural language web-based and spoken dialog systems at IBM. Before joining IBM, he has worked on information retrieval and filtering algorithms as a senior research scientist at WiseWire Corporation, Pittsburgh and on image compression algorithms while working as a postdoctoral fellow under Prof. Simon Haykin at McMaster University, Canada. Nanda's research interests are focused on NLP and technology solutions for creating, storing, searching, and processing large volumes of unstructured data (text, audio, video, etc.) and specifically on applications of statistical learning algorithms to these tasks.

The Interdependency between Core Network Capacity and Radio Access Network Performance
Friday, November 05, 2010
Will Egner

Read More

Hide

Abstract: We will review practical network performance considerations in ultra-large networks. We will analyze how poorly configured networks combine with smartphones to overload network control plane resources and ultimately affect key wireless KPIs such as accessibility and retainability. We will provide both 2G and 3G examples of how core network optimization can directly improve these key wireless KPIs.

Biography: Dr. Will Egner is a technical leader with over 20 years of industry experience in network consulting, system engineering and software product development.

Dr. Egner launched his career as a system engineer for Texas Instruments, developing advanced digital signal processing algorithms for autonomous weapon systems. After leaving Texas Instruments, he joined Nortel Networks' Wireless System Engineering Team. During his tenure, he successfully collaborated with service providers globally to optimize, expand, and evolve their networks. As a Senior Manager at Nortel Networks, Dr. Egner was responsible for incubating and growing several high-impact network planning and technology teams. In 2000, Egner joined Glow Networks as Chief Network Architect, where he led product development of "optically-intelligent" software that automated the design of DWDM optical networks.

In 2003, Dr. Egner co-founded Cerion Optimization Services (www.cerioninc.com), a consulting and software company focused on delivering "Methodology and Cutting-Edge Software to power better decisions". He currently serves as Chief Technology Officer, North America.

Dr. Egner holds several patents in the communication systems field and has authored several papers in this area. Will graduated with a B.S.E.E. from Clarkson University, an M.S.E.E. from the Georgia Institute of Technology, and a Ph.D. from the University of Texas at Arlington. His doctoral thesis focused on the dynamic re-configuration of wireless communication networks.

Proactive Malware Defense: New Techniques and Two Case Studies
Thursday, October 21, 2010
Guofei Gu

Read More

Hide

Abstract: Most of the attacks and fraudulent activities on the Internet are carried out by malware. For example, botnets, the state-of-the-art malware, are now the primary "platforms" for cyberattacks, e.g., spam, DDoS, and data theft. Most of our current solutions to malware defense are still passive and reactive, focusing on defending against known attacks. The situation is becoming worse and worse because the economic engine of profit-driven malware attacks is quickly transforming the threat and defense landscape to favor attackers more and more, as they enjoy many fundamental advantages over defenders (known as the asymmetries of security).

In this talk, I propose to put more research focus on "proactive" malware defense strategies and to develop "game-changing" defense approaches. In particular, I will introduce two case studies of such proactive malware defense techniques we have developed. In the first case study, I will present new techniques to automatically detect existing "unknown" vulnerabilities in software so that we can find (and hopefully fix) the problems ahead of attackers. Our prototype system has already found dozens of previously unknown vulnerabilities in popular software such as Adobe Acrobat and Microsoft Paint. In the second case study, I will present new active probing techniques that can greatly complement existing botnet detection solutions.

Biography: Guofei Gu is an assistant professor in the Department of Computer Science & Engineering at Texas A&M University. Before coming to Texas A&M, he received his Ph.D. degree in Computer Science from the College of Computing, Georgia Institute of Technology. His research interests are in network and system security, such as malware analysis/detection/defense, intrusion/anomaly detection, and web and social network security. Dr. Gu is a recipient of a 2010 NSF CAREER award and a co-recipient of the 2010 IEEE Symposium on Security & Privacy (Oakland'10) best student paper award. He is currently directing the SUCCESS (Secure Communication and Computer Systems) Lab at TAMU.

Reliable and Energy-Efficient Data Delivery in Sparse Wireless Sensor Networks with Multiple Mobile Sinks
Wednesday, September 08, 2010
Giuseppe Anastasi

Read More

Hide

Abstract: This talk addresses the problem of reliable and energy-efficient data delivery in sparse Wireless Sensor Networks (WSNs) with multiple Mobile Sinks (MSs). This is a critical task, especially when MSs move randomly, as interactions with sensor nodes are unpredictable, typically of short duration, and affected by message losses. In addition, multiple MSs can be simultaneously present in a sensor's contact area, making minimum-energy data delivery a complex optimization problem. To solve these issues, we propose a novel protocol that efficiently combines erasure coding with an ARQ scheme. The key features of the proposed protocol are: (i) the use of redundancy to cope efficiently with message losses in the multiple-mobile-sink environment, and (ii) the ability to adapt the level of redundancy based on feedback sent back by MSs through ACKs. We observed by simulation that our protocol outperforms an alternative protocol that relies only on an ARQ scheme, even when there is a single MS. We also validated our simulation results through a set of experimental measurements based on real sensor nodes. Our results show that the adoption of encoding techniques increases sensor lifetime by 40-55% compared to simple ARQ approaches when applied to WSNs with MSs.
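The redundancy-adaptation feedback loop (feature ii) can be sketched as follows. This is a hypothetical simplification with all parameter names invented: a message is erasure-coded into fragments such that the sink can rebuild it from any `data_frags` of them, the ACK reports how many fragments actually arrived, and the sender scales the next message's redundancy to the observed loss rate plus a safety margin, instead of blindly retransmitting as plain ARQ would.

```python
def next_redundancy(data_frags, sent_frags, acked_frags, margin=1.1):
    """Sketch: choose how many coded fragments to send for the next
    message, given how many of the last `sent_frags` fragments the
    mobile sink acknowledged. The sink can decode from any `data_frags`
    fragments, so redundancy only needs to cover the observed loss."""
    loss = 1.0 - acked_frags / sent_frags         # observed per-contact loss
    needed = data_frags / max(1.0 - loss, 1e-6)   # expected sends to survive
    return max(data_frags, int(needed * margin + 0.999))  # ceil + margin
```

Sending just enough redundancy (rather than retransmitting on every loss) is where the energy saving over pure ARQ comes from.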

Biography: Giuseppe Anastasi is an associate professor of Computer Engineering at the Department of Information Engineering of the University of Pisa, Italy. He received the MS degree in Electronics Engineering, and the PhD degree in Computer Engineering, both from the University of Pisa, in 1990 and 1995, respectively. His research interests include pervasive computing systems, sensor networks, and green computing. He is the founding co-chair of the Pervasive Computing & Networking Laboratory (PerLab), and has contributed to many research programs funded by both national and international institutions. He is a co-editor of the book Advanced Lectures in Networking (LNCS 2497, Springer, 2002), and has published about 90 research papers in the area of computer networking and pervasive computing. He is an area editor of the Elsevier journals of Pervasive and Mobile Computing (PMC) and Computer Communications (ComCom). Recently, he served as Program Chair of IEEE PerCom 2010. Previously, he also served as General Co-chair of IEEE WoWMoM 2005, Program Co-chair of IEEE WoWMoM 2008, Vice Program Chair of IEEE MASS 2007, Workshops Chair of IEEE PerCom 2006, IEEE WoWMoM 2006, and IEEE ICCCN 2007. He launched the International Workshop on Sensor Networks and Systems for Pervasive Computing (PerSeNS), co-located with IEEE PerCom. He has been a member of the IEEE Computer Society since 1994.

Collaborative Intrusion Detection in Mobile Ad-Hoc Networks
Friday, May 28, 2010
Raja Datta

Read More

Hide

Abstract: Due to the inherent characteristics of a mobile ad hoc network (MANET), such as mobility, wireless communication, and the lack of any centralized authority, providing security in a MANET is a challenging task, to say the least. One approach to providing security is to implement an Intrusion Detection System (IDS), which detects intrusions by malicious nodes, if any, in the network and helps the network respond to them accordingly. Although several intrusion detection schemes have been proposed in the literature, important problematic issues such as collusion among malicious nodes are not taken care of. Another main issue that concerns MANETs is that the mobile devices are battery-powered and have limited computational resources. Hence, whatever security mechanism is deployed in the MANET, it is also necessary to ensure that it does not consume too much of the battery power and computational resources of the mobile nodes. In this talk, first some secure addressing schemes will be discussed. Next, two collaborative intrusion detection techniques for MANETs will be presented; the detection system takes care of colluding malicious nodes. Lastly, a game-theoretic proposal for the efficient activation of IDSs will be discussed that can be used in a MANET without compromising effectiveness.

Biography: Dr. Raja Datta received his B.E. in Electronics and Telecommunications Engineering from the National Institute of Technology Silchar, India in 1988. He did his M.Tech. in Computer Engineering and Ph.D. in Computer Science and Engineering, both from the Indian Institute of Technology (IIT) Kharagpur, India. Currently he is an Associate Professor in the Department of Electronics and Electrical Communication Engineering at IIT Kharagpur. He has several publications in international journals and conferences and is a member of the review boards of several international journals. Dr. Datta is the Chief Investigator of two sponsored projects funded by the Indian Space Research Organization (ISRO) and the Department of Information Technology (DIT), Govt. of India. He is also the Chief Consultant of a consultancy project for DRDO on developing TCP over MANETs. His main research interests include mobile ad hoc and sensor networks, WDM optical networks, computer architecture, and distributed processing.

Cooperative Control for Distributed Networked Teams
Friday, April 30, 2010
Frank L. Lewis

Read More

Hide

Abstract: Distributed systems of agents linked by communication networks only have access to information from their neighboring agents, yet must achieve global agreement on team activities to be performed cooperatively. Examples include networked manufacturing systems, wireless sensor networks, networked feedback control systems, and the internet. Sociobiological groups such as flocks, swarms, and herds have built-in mechanisms for cooperative control wherein each individual is influenced only by its nearest neighbors, yet the group achieves consensus behaviors such as heading alignment, leader following, exploration of the environment, and evasion of predators. It is known that groups of fireflies and of crickets align their frequencies, neurons in the brain fall into patterns of interacting burst phenomena, and biological groups fall into the circadian rhythm. It was shown by Charles Darwin that local interactions between population groups over long time scales lead to global results such as the evolution of species.

This talk will review ideas of cooperative control for networked interacting teams. Included are local voting protocols, second-order consensus, and the synchronization of distributed interacting oscillators. Local protocols based only on interactions between neighbors lead to globally optimal behavior of distributed teams. Results from graph theory show the importance of the communication structure on the agreement reached by the networked team. Results from Lyapunov theory show the convergence to consensus values for nonlinear interaction protocols.

Consensus performance generally depends on the communication graph structure and cannot be independently controlled. This can pose severe limitations on the performance of distributed systems, including slow speeds of consensus and convergence to uncontrollable values. Some protocols are described which allow a team to reach consensus agreement that is independent of the communication graph structure, and can be effectively controlled by team leaders or cooperative decision makers.
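A first-order local voting protocol of the kind reviewed in this talk can be sketched in a few lines (an illustrative example, assuming a fixed undirected communication graph and a small step size `eps`, both chosen here for the demonstration):

```python
import numpy as np

def consensus(x0, A, eps=0.1, steps=300):
    """Sketch of a first-order local voting protocol: each agent i
    repeatedly nudges its state toward its graph neighbors,
        x_i <- x_i + eps * sum_j a_ij * (x_j - x_i).
    For a connected undirected graph and small enough eps, all states
    converge to the average of the initial values (the consensus value)."""
    x = np.array(x0, dtype=float)
    A = np.asarray(A, dtype=float)
    for _ in range(steps):
        # A @ x - deg * x  ==  sum_j a_ij (x_j - x_i) for every agent
        x = x + eps * (A @ x - A.sum(axis=1) * x)
    return x
```

Note that each agent uses only its neighbors' states, yet the whole team agrees on a global quantity; the graph structure (through the Laplacian eigenvalues) sets how fast, which is exactly the dependence the abstract says the structure-independent protocols are designed to break.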

Biography: Prof. Frank L. Lewis, Fellow IEEE, Fellow IFAC, Fellow U.K. Institute of Measurement & Control, PE Texas, U.K. Chartered Engineer, is Distinguished Scholar Professor and Moncrief-O'Donnell Chair at the University of Texas at Arlington's Automation & Robotics Research Institute. He obtained the Bachelor's Degree in Physics/EE and the MSEE at Rice University, the MS in Aeronautical Engineering from Univ. W. Florida, and the Ph.D. at Georgia Tech. He works in feedback control, intelligent systems, distributed control systems, and sensor networks. He is the author of 6 U.S. patents, 216 journal papers, 330 conference papers, 14 books, 44 chapters, and 11 journal special issues. He received the Fulbright Research Award, an NSF Research Initiation Grant, the ASEE Terman Award, the Int. Neural Network Soc. Gabor Award 2009, and the U.K. Inst. Measurement & Control Honeywell Field Engineering Medal 2009. He received the Outstanding Service Award from the Dallas IEEE Section and was selected as Engineer of the Year by the Ft. Worth IEEE Section. He is listed in the Ft. Worth Business Press Top 200 Leaders in Manufacturing. He served on the NAE Committee on Space Station in 1995. He is an elected Guest Consulting Professor at South China University of Technology and Shanghai Jiao Tong University, and a Founding Member of the Board of Governors of the Mediterranean Control Association. He helped win the IEEE Control Systems Society Best Chapter Award (as Founding Chairman of the DFW Chapter), the National Sigma Xi Award for Outstanding Chapter (as President of the UTA Chapter), and the US SBA Tibbetts Award in 1996 (as Director of ARRI's SBIR Program).

Motion Planning for Physical Systems
Monday, April 26, 2010
Lydia E. Kavraki

Read More

Hide

Abstract: Over the last decade, the development of robot motion planning algorithms to solve complex geometric problems has not only contributed to advances in industrial automation and autonomous exploration, but also to a number of diverse fields such as graphics animation and computational structural biology.
This talk will relate the current state-of-the-art and detail on-going work on developing sampling-based planners for systems with increased physical realism. Recent advances in planning for hybrid systems will be described, as well as the challenges of combining formal logic and planning for creating safe and reliable systems. The talk will then briefly demonstrate how the experience gained through robotics planning has led to algorithmic tools for analyzing the flexibility and interactions of biomolecules for drug discovery.

Biography: Lydia E. Kavraki is the Noah Harding Professor of Computer Science and Professor of Bioengineering at Rice University. She also holds a joint appointment at the Department of Structural and Computational Biology and Molecular Biophysics at the Baylor College of Medicine in Houston. Kavraki received her B.A. in Computer Science from the University of Crete in Greece and her Ph.D. in Computer Science from Stanford University working with Jean-Claude Latombe. Her research contributions are in physical algorithms and their applications in robotics and computational structural biology and bioinformatics. Kavraki is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), a Fellow of the American Institute for Medical and Biological Engineering (AIMBE), and a Fellow of the World Technology Network. She currently serves as a Distinguished Lecturer for the IEEE Robotics and Automation Society.

DataGuard: Dynamic Data Attestation in Wireless Sensor Networks
Friday, April 23, 2010
Donggang Liu

Read More

Hide

Abstract: Attestation has become a promising approach for ensuring software integrity in wireless sensor networks. However, current attestation either focuses on static system properties, e.g., code integrity, or requires hardware support such as a Trusted Platform Module (TPM). There are attacks exploiting vulnerabilities that do not violate static system properties, and sensor platforms may not have hardware-based security support. This talk presents a software attestation scheme for dynamic data integrity based on data boundary integrity. It automatically transforms the source code and inserts data guards to track run-time program data. A data guard is unrecoverable once it is corrupted by an attacker, even if the attacker fully controls the system later. The corruption of any data guard at run time can be remotely detected. A corruption either indicates a software attack or a bug in the software that needs immediate attention. The benefits of the proposed attestation scheme are as follows. First, it does not rely on any additional hardware support, making it suitable for low-cost sensor nodes. Second, it introduces minimal communication cost and has adjustable run-time memory overhead. Third, it works even if sensor nodes use different hardware platforms, as long as they run the same software. The prototype implementation and the experiments on TelosB motes show that the proposed technique is both effective and efficient for sensor networks.
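The data-guard idea can be illustrated with a toy model. The actual scheme instruments C source code on sensor nodes; the sketch below (all names invented) only mimics the boundary-guard principle: an overflow past a buffer's end necessarily clobbers a secret guard value, which a later attestation check detects.

```python
import secrets

def make_guarded_buffer(size, guard_len=4):
    """Toy illustration of a boundary data guard: random guard bytes are
    placed just past the buffer's end, so any write that overruns the
    buffer boundary must clobber them."""
    guard = secrets.token_bytes(guard_len)
    mem = bytearray(size) + bytearray(guard)
    return mem, guard

def unchecked_write(mem, data):
    # deliberately unchecked write, modeling a buffer-overflow bug
    mem[:len(data)] = data

def attest(mem, guard):
    """Attestation pass: is the guard region still intact?"""
    return bytes(mem[-len(guard):]) == guard
```

Because the guard value is secret and random, an attacker who has already overwritten it cannot restore it to pass a later attestation, which is the "unrecoverable once corrupted" property the abstract describes.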

Biography: Dr. Donggang Liu is an Assistant Professor of Computer Science and Engineering at the University of Texas at Arlington (UTA). He joined UTA in August 2005 after he graduated from North Carolina State University with a PhD degree in Computer Science. His research interests are in computer and network security, particularly in ad-hoc network security and software security. His research has been supported by the National Science Foundation (NSF) and the Army Research Office (ARO).

Efficient L0-norm Constrained Nonnegative Matrix Factorization
Friday, April 23, 2010
Vamsi Potluru

Read More

Hide

Abstract: Nonnegative Matrix Factorization (NMF) is now a standard tool for data analysis. An important variant is the sparse NMF problem. We consider both versions of the problem, in which sparsity (measured by the L0 norm) is imposed on one or both of the estimated factors, either implicitly or explicitly. Although algorithms for solving these are available, they are typically inefficient. Moreover, the explicit version is usually solved by using the L1 norm as a proxy for the L0 constraint.

We propose an efficient algorithm to handle the norm constraint that arises when solving both versions of the problem. Our algorithm is faster than existing algorithms on the implicit version, and can handle the L0 constraint in the explicit version directly. This is shown by comparing our algorithm with competing algorithms on various data sets of practical interest.
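One direct way to enforce an L0 constraint without an L1 proxy can be sketched as follows. This is not the talk's algorithm; the names and the projection schedule are invented for illustration: run plain multiplicative NMF updates, then hard-project each column of H onto its s largest entries (an exact L0 projection), and refit W to the sparsified H.

```python
import numpy as np

def l0_sparse_nmf(X, rank, s, iters=400, eps=1e-9):
    """Sketch of NMF with an explicit L0 constraint ||h_j||_0 <= s on each
    column of H: standard multiplicative updates, followed by keeping only
    the s largest entries per column of H (the exact Euclidean projection
    onto the L0 ball for nonnegative vectors) and refitting W."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X) / (W.T @ W @ H + eps)
    if s < rank:
        drop = np.argsort(H, axis=0)[:rank - s, :]   # smallest per column
        np.put_along_axis(H, drop, 0.0, axis=0)      # exact L0 projection
        for _ in range(50):                          # refit W to sparse H
            W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Unlike an L1 penalty, the projection guarantees the sparsity level exactly; the trade-off (and the talk's contribution) is doing this efficiently inside the optimization rather than once at the end, as in this sketch.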

Biography:

SPUR: A Trace-Based JIT Compiler for CIL
Monday, April 19, 2010
Nikolai Tillmann

Read More

Hide

Abstract: Tracing just-in-time compilers (TJITs) determine frequently executed traces (hot paths and loops) in running programs and focus their optimization effort by emitting optimized machine code specialized to these traces. Prior work has established this strategy to be especially beneficial for dynamic languages such as JavaScript, where the TJIT interfaces with the interpreter and produces machine code from the JavaScript trace.

This direct coupling with a JavaScript interpreter makes it difficult to harness the power of a TJIT for other components that are not written in JavaScript, e.g., the DOM implementation or the layout engine inside a browser. Furthermore, if a TJIT is tied to a particular high-level language interpreter, it is difficult to reuse it for other input languages, as the optimizations are likely targeted at specific idioms of the source language.

To address these issues, we designed and implemented a TJIT for Microsoft's Common Intermediate Language CIL (the target language of C#, VisualBasic, F#, and many other languages). Working on CIL enables TJIT optimizations for any program compiled to this platform. In addition, to validate that the performance gains of a TJIT for JavaScript do not depend on specific idioms of JavaScript that are lost in the translation to CIL, we provide a performance evaluation of our JavaScript runtime which translates JavaScript to CIL and then runs on top of our CIL TJIT.
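The core tracing idea can be caricatured in a few lines: interpret a loop until it becomes hot, record the trace of one iteration, then run a specialized version of that trace for the remaining iterations. This is only a toy illustration of the concept, not SPUR's actual machinery; the op names, the hotness threshold, and the "compilation" to a Python closure are all invented.

```python
def run_loop(ops, n, threshold=3):
    """Run a loop of n iterations over a tiny per-iteration bytecode,
    e.g. ops = [("add", 2), ("mul", 3)].  Interprets until the loop is
    'hot', then records the trace and runs a specialized fast path."""
    def interpret_once(acc):
        # Slow path: generic op dispatch every iteration.
        for op, arg in ops:
            if op == "add":
                acc += arg
            elif op == "mul":
                acc *= arg
        return acc

    acc, compiled = 0, None
    for i in range(n):
        if compiled is not None:
            acc = compiled(acc)      # fast path: run the recorded trace
        elif i + 1 == threshold:
            # Loop is hot: record the trace and "compile" it into one
            # closure (a stand-in for emitting specialized machine code).
            trace = list(ops)
            def compiled(a, trace=trace):
                for op, arg in trace:
                    a = a + arg if op == "add" else a * arg
                return a
            acc = interpret_once(acc)
        else:
            acc = interpret_once(acc)
    return acc
```

Running on CIL rather than JavaScript corresponds, in this caricature, to making `ops` a common intermediate form so the same trace machinery serves many source languages.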

Biography: Nikolai Tillmann is a Principal Research Software Design Engineer in the Research in Software Engineering (RiSE) group at Microsoft Research. He is leading the Pex project, a framework for automated test case generation for .NET applications based on parameterized unit testing and dynamic symbolic execution. Nikolai is also involved in the Spur project, where he works on a tracing just-in-time compiler for .NET and JavaScript code. Previously, he worked on AsmL and Spec Explorer, an executable modeling language and a model-based testing tool, respectively; the latter is now used on a large scale at Microsoft to facilitate quality assurance of protocol documentation.

Before coming to Microsoft Research, Nikolai received his M.S. (Diplom) in Computer Science from the Technical University of Berlin in 2000, and was involved in the development of a school management system in Germany.

Reliability and Energy-efficiency of IEEE 802.15.4/ZigBee Wireless Sensor Networks
Friday, April 09, 2010
Mario Di Francesco


Abstract: Wireless Sensor Networks (WSNs) provide a very promising solution for a wide range of application scenarios. In this context, IEEE 802.15.4 and ZigBee have emerged as de facto standards for communications. However, WSNs based on these standards seriously suffer from reliability problems, particularly when power management is enabled for energy conservation. We will first demonstrate that this issue is mainly due to the default MAC parameter settings suggested by the IEEE 802.15.4 standard. We will next show that, with a more appropriate parameter setting, it is possible to achieve the desired level of communication reliability as well as higher energy efficiency. Finally, we will present a novel scheme, called ADaptive Access Parameters Tuning (ADAPT), for dynamically tuning the MAC parameters, which exploits the current network conditions and the target level of reliability requested by the applications.
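A bang-bang caricature of such a controller is sketched below. The parameter names are the standard IEEE 802.15.4 MAC attributes (macMinBE, macMaxCSMABackoffs, macMaxFrameRetries) with their standard ranges, but the update rule and the 0.05 hysteresis margin are invented for illustration; this is not the published ADAPT algorithm.

```python
def adapt_tune(params, delivery_ratio, target=0.9):
    """One step of an ADAPT-like controller (simplified sketch): if the
    measured delivery ratio falls below the target reliability, make the
    802.15.4 CSMA/CA parameters more conservative; if it is comfortably
    above the target, relax them to save energy."""
    macMinBE, macMaxCSMABackoffs, macMaxFrameRetries = params
    if delivery_ratio < target:
        # Below target: back off more and retry more (capped at the
        # maxima allowed by the standard).
        macMinBE = min(macMinBE + 1, 8)
        macMaxCSMABackoffs = min(macMaxCSMABackoffs + 1, 5)
        macMaxFrameRetries = min(macMaxFrameRetries + 1, 7)
    elif delivery_ratio > target + 0.05:
        # Well above target: relax parameters to reduce energy cost.
        macMinBE = max(macMinBE - 1, 0)
        macMaxCSMABackoffs = max(macMaxCSMABackoffs - 1, 0)
        macMaxFrameRetries = max(macMaxFrameRetries - 1, 0)
    return (macMinBE, macMaxCSMABackoffs, macMaxFrameRetries)
```

The hysteresis band between `target` and `target + 0.05` keeps the parameters from oscillating when reliability hovers near the target.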

Biography: Dr. Mario Di Francesco is a Research Associate in the Center for Research in Wireless Mobility and Networking (CReWMaN), University of Texas at Arlington. He received his PhD from the Department of Information Engineering at the University of Pisa (Italy) in May 2009. He was a visiting scholar at UTA during Fall 2008 and also a research fellow in the Real Time Systems Lab (RETIS) of the Scuola Superiore S. Anna in Pisa (Italy). His research interests include performance evaluation and design of adaptive algorithms for wireless sensor networks.

Zeptotech and Zettaflops: Need for Speed in Diagnostics using Nano-Bio-Molecular Sensors
Friday, March 26, 2010
Samir Iqbal


Abstract: The ability to electrically sense and characterize biological entities at the single molecule level can facilitate rapid diagnostics
and better therapeutics. Miniaturization of standard biochemical test platforms on biochips can provide useful information
from just thousands of target molecules, but the data deluge between sensors and networks is a major bottleneck. Nano-scale
molecular sensors can provide vital information about patients, but much needs to be done for real-time data acquisition,
comparison, analysis, retrieval and decision making while maintaining safety and security of the patient records.

This talk will focus on our work in nano-bio sensors that provide information at gene, protein and cellular levels about the
state of diseases like cancer. The stochastic nature of the information from such genomic and proteomic sensors requires highly predictive methods and approaches at many levels to distinguish a diseased cell from a normal cell.

Biography: Dr. Samir Iqbal is an Assistant Professor in Electrical Engineering at the University of Texas at Arlington (UTA). After receiving
his Ph.D. from Purdue University in 2007, he established Nano-Bio Lab at UTA. His lab focuses on the design of novel solid-state
sensors for the selective detection of biological molecules and elucidation of molecular interactions. He is also affiliated with the
Nanotechnology Research and Teaching Facility (NanoFAB) and serves on the Joint Graduate Studies Committee of the Joint
Bioengineering Program between UTA Department of Bioengineering and UTSW Medical Center.

Salsa-ReDS: Reputation for enhancing the robustness of P2P systems
Friday, March 12, 2010
Matthew Wright


Abstract: Salsa is one of several recent designs for a structured peer-to-peer system that uses path diversity and redundancy to ensure greater robustness against attackers in the lookup process. In this talk, we first describe the Salsa architecture and discuss the general problem of distributed directory services in open systems. We then present Salsa-ReDS (Salsa with Reputation for Directory Services), a simple but powerful way to further improve the robustness of Salsa lookups. In Salsa-ReDS, each node tracks the performance of its peers in each lookup and uses that information to gauge the relative reliability of the peers for future lookups. We show in simulation that this technique can greatly reduce the chance of an attacker manipulating the lookup results, or can maintain the same robustness with lower overhead. We conclude by describing how the ReDS idea can also be applied to other systems, along with some of the potential pitfalls, challenges, and opportunities for future research in this approach.
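The per-node bookkeeping can be sketched as follows. This is a hypothetical simplification: the class, the success-ratio score, and the Laplace-style prior are my own choices, not the scheme's actual reputation function.

```python
class ReDSNode:
    """Sketch of ReDS-style peer reputation: a node scores each peer by
    its observed lookup success rate and prefers high-scoring peers when
    choosing which peers to use for future redundant lookups."""
    def __init__(self, peers):
        # [successes, trials] with a small prior so untested peers start
        # at score 0.5 rather than 0 or undefined.
        self.stats = {p: [1, 2] for p in peers}

    def record(self, peer, success):
        s, t = self.stats[peer]
        self.stats[peer] = [s + (1 if success else 0), t + 1]

    def score(self, peer):
        s, t = self.stats[peer]
        return s / t

    def choose(self, redundancy):
        # Pick the `redundancy` most reliable peers for the next lookup.
        return sorted(self.stats, key=self.score, reverse=True)[:redundancy]
```

Peers that return bad or missing lookup results sink in score and stop being chosen, which is how the simulated attacker's influence on lookups is reduced.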

Biography: Dr. Matthew Wright is an assistant professor at the University of Texas at Arlington. He graduated with his Ph.D. from the Department of Computer Science at the University of Massachusetts in May 2005, where he earned his M.S. in 2002. His dissertation work addresses the robustness of anonymous communications. His other interests include intrusion detection, security and privacy in mobile and ubiquitous systems, and the application of incentives and game theory to security and privacy problems. Previously, he earned his B.S. degree in Computer Science at Harvey Mudd College. He is a recipient of the NSF CAREER Award and the Outstanding Paper Award at the 2002 Symposium on Network and Distributed System Security.

Dynamic Scene Segmentation from Two Views
Wednesday, March 10, 2010
Ninad Thakoor


Abstract: Dynamic scene interpretation is at the heart of many computer vision applications such as video surveillance, video retrieval, navigation of mobile robots, intelligent environments, and assistive technologies for the visually impaired or elderly. A typical dynamic scene includes multiple independently moving objects captured by a moving camera. Segmenting this unknown number of moving objects is a vital step towards interpretation of a dynamic scene.

The segmentation problem can be seen as model-based clustering with an unknown number of clusters. For model-based clustering, to assign a data point to an appropriate cluster, the number of clusters and the corresponding cluster parameters should be known. On the other hand, the cluster parameters can be computed only if the cluster assignments are known. This "chicken-and-egg" dilemma leads to an iterative formulation for model-based clustering.

Clustering aims to optimize a cost to achieve an optimal solution. If the number of clusters is increased, generally, the cost for the same data reduces. Thus for meaningful clustering, the clustering cost must be penalized for additional clusters. A variety of model selection methods
exist which incorporate this idea. To apply model selection to clustering, candidate models are generated sequentially by varying the number of clusters and the best model according to a model selection criterion is selected.

For the image data encountered in computer vision applications, the iterative and sequential problem of model selection can be simplified to a one step optimization by using the knowledge that the clusters formed in an image are spatially coherent. The candidates for cluster parameters can be
generated by sampling spatially coherent image data points. Once the candidates are known, a subset of these candidates can be selected by optimizing a model selection criterion. This transforms the problem into a one-step model selection problem, which is solved by a novel branch-and-bound process. The proposed approach efficiently searches the solution space and guarantees optimality over the current set of hypotheses.
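The penalized-cost idea behind model selection can be made concrete with a BIC-style score over k-means fits: as k grows, the fitting cost drops, but a penalty proportional to the number of free parameters grows. This is an illustrative stand-in, not the talk's criterion or its branch-and-bound search; the spherical-Gaussian likelihood, the penalty form, and Lloyd's-algorithm fitting are assumptions.

```python
import numpy as np

def bic_kmeans(X, k, iters=50, seed=0):
    """Fit k clusters by Lloyd's algorithm and return a BIC-style score:
    spherical-Gaussian log-likelihood minus a penalty that grows with
    the number of clusters."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    sse = ((X - centers[labels]) ** 2).sum()
    var = max(sse / (n * d), 1e-12)
    loglik = -0.5 * n * d * (np.log(2 * np.pi * var) + 1)
    penalty = 0.5 * (k * d) * np.log(n)  # k centers of dimension d
    return loglik - penalty

def select_k(X, kmax=5):
    """Sequentially evaluate candidate models and keep the best score --
    the iterative procedure the talk's one-step formulation avoids."""
    return max(range(1, kmax + 1), key=lambda k: bic_kmeans(X, k))
```

The abstract's point is that for spatially coherent image data this sequential search over k can be replaced by a single optimization over sampled candidate clusters.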

Biography: Ninad Thakoor received the B.E. degree in electronics and telecommunication engineering from University of Mumbai, Mumbai, India, in 2001, and the M.S. and Ph.D. in electrical engineering from the University of Texas at Arlington in 2004 and 2009, respectively, under the supervision of Dr. Jean Gao. His research interests include visual object recognition, stereo disparity segmentation, and
structure-and-motion segmentation.

Optimization of switched diversity systems and its application to a multiuser scheduler
Monday, March 08, 2010
Haewoon Nam


Abstract: This talk first addresses an optimization problem in a switched diversity system, which is a popular and indispensable technology for low-cost mobile devices. The goal is to find the optimal switching threshold(s), based on the idea of a per-branch threshold, in order to maximize the output signal-to-noise ratio (SNR) or to minimize the bit error rate (BER). The numerical and simulation results show that using per-branch thresholds not only allows a simpler computation of the threshold(s) but also offers a higher capacity than the conventional system based on a single threshold. Then we discuss the application of the per-branch threshold idea to a multiuser scheduling problem along with a user grouping concept. Finally, a resource allocation problem in cognitive radio systems based on location information is briefly introduced. In addition, a low-complexity transmitter design for 4G wireless cellular systems is also briefly discussed, time permitting.
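The effect of a switching threshold can be illustrated with a Monte-Carlo sketch of two-branch switch-and-stay combining over Rayleigh fading, allowing a different threshold per branch. This is a simplified baseline for illustration, not the talk's optimized scheme; the fading model, thresholds, and sample counts are invented.

```python
import random

def ssc_avg_snr(thresholds, mean_snr=1.0, n=20000, seed=1):
    """Monte-Carlo average output SNR of two-branch switch-and-stay
    combining over Rayleigh fading (SNR exponentially distributed):
    stay on the current branch while its SNR meets that branch's
    threshold, otherwise switch to the other branch regardless of its
    current SNR.  Illustrative sketch only."""
    rng = random.Random(seed)
    total, branch = 0.0, 0
    for _ in range(n):
        snr = [rng.expovariate(1.0 / mean_snr) for _ in range(2)]
        if snr[branch] < thresholds[branch]:
            branch = 1 - branch  # switch and stay: no second comparison
        total += snr[branch]
    return total / n
```

With thresholds of zero the receiver never switches and the output SNR equals the single-branch mean; a well-chosen threshold (around the mean SNR for i.i.d. branches) yields a clear diversity gain, which is the quantity the talk's optimization maximizes.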

Biography: Dr. Haewoon Nam received the Ph.D. degree in Electrical and Computer Engineering from the University of Texas at Austin in December 2006. From 1999 to 2002, he was with Samsung Electronics, where he was engaged in the design and development of CDMA and GSM/GPRS baseband modem processors. In the summer of 2003, he was with the IBM T.J. Watson Research Center, Yorktown Heights, NY, where he performed extensive radio channel measurements and analysis at 60 GHz. In the fall of 2005, he was with Freescale Semiconductor, where he was engaged in the design and test of the WiMAX MAC layer. His industry experience also includes work at the Samsung Advanced Institute of Technology, where he participated in the simulation of MIMO systems for the 3GPP LTE standard. In October 2006, he joined the Mobile Devices Technology Office, Motorola Inc., where he is involved in algorithm design and development for 3GPP LTE mobile systems, including modeling of the 3GPP LTE modem processor. He is a recipient of a Korean government fellowship for his doctoral studies in the field of electrical engineering and is a senior member of the IEEE.

The Synergistic Relationship Between Pervasive Computing, Sensor Networks, and Autonomous Vehicles
Friday, March 05, 2010
Brian Huff


Abstract: The purpose of this talk is to discuss the interrelationships that exist between the emerging areas of Pervasive Computing, Sensor Networks, and Autonomous Vehicles. The talk will present the recent activities of UTA's Autonomous Vehicles Laboratory (AVL) and will discuss applications that would appear to be consistent with the research interests of UTA's CReWMaN organization. The talk will touch on the evolution of the enabling technologies that support these promising research areas. A case will be made for the synergistic and symbiotic relationships between sensor network applications and autonomous mobile systems. Mobile systems, both manned and autonomous, need localization assistance and projected sensing capabilities. These services can be provided by sensor networks. Sensor networks require installation, maintenance, and repair. These services can be provided by autonomous mobile platforms. The two systems can be combined to create a network or swarm of WASPs (Wandering Autonomous Sensor Platforms) that are capable of positioning themselves to provide a specific set of services to accomplish a given mission. It is hoped that the presentation will be informal and provide a venue for discussing how AVL and CReWMaN can collaborate in areas of mutual interest.

Biography: Dr. Brian Huff is an Associate Professor of Industrial and Manufacturing Engineering at The University of Texas at Arlington. Dr. Huff has an extensive research record in the areas of automated process development, the design and deployment of reconfigurable automation systems, and system capacity analysis using discrete event simulation techniques. Dr. Huff has been very active in building a research laboratory to support the development and deployment of automated manufacturing processes and reconfigurable automation technologies. In 2003, Dr. Huff joined a multi-disciplinary team of UTA professors (Drs. Dogan (MAE), Reyes (CSE), and Subbarao (MAE)) to revitalize UTA's Autonomous Vehicles Laboratory (AVL). Since then, the multi-disciplinary team has secured financial support from Bell Helicopter XworX, Lockheed Martin Aero and the Texas Workforce Commission to support research and educational programs associated with autonomous system technologies. Dr. Huff played an instrumental role in the formation of the Lone Star Texas Chapter of the Association for Unmanned Vehicle Systems International (AUVSI). He served as the Founding Vice President of the organization in 2006, has held the position of Academic Relations Chair, and is currently serving as Chapter Vice President. The AVL faculty have established a multi-disciplinary course that provides an interdisciplinary design experience in the area of Autonomous Vehicle Systems Design within the College of Engineering. The AVL has also fielded student teams that represent UTA in AUVSI's international student competitions. In the Student Unmanned Aerial System Competition, UTA has placed 1st once and 3rd twice in its five-year history of competing in the event. UTA also placed 10th overall in its first year of re-entering the AUVSI Student International Ground Vehicle Competition.

Talk 1: Knowledge acquisition from multimedia documents using evolving ontologies;
Talk 2: Exhibiting affect and adaptivity in human robot interaction: an affective robot guide to Museums

Thursday, March 04, 2010
Vangelis Karkaletsis


Abstract: Title 1: Knowledge acquisition from multimedia documents using evolving ontologies

Abstract 1:
Knowledge acquisition is a particularly hard problem, even in the case of text documents. Moving to multimedia documents increases further the difficulty of the task. A counter-argument is that by combining multiple modalities we might be able to increase the performance of single-modality methods. This is what we examined in the context of the EC-funded R&D project BOEMIE (http://www.boemie.org). More specifically, in BOEMIE we examined the use of evolving multimedia ontologies in a synergistic approach that combines multimedia extraction and ontology evolution in a bootstrapping process. This involves the continuous extraction of semantic information from multimedia content in order to populate and enrich the ontologies, and the deployment of these ontologies to enhance the performance of the extraction system. The presentation discusses the project achievements, the open problems and the potential of the proposed approach.

Title 2: Exhibiting affect and adaptivity in human robot interaction: an affective robot guide to Museums

Abstract 2:
The basic goal of human robot interaction is to establish an effective communication between the two parties. In particular, robot emotion, speech, and facial expressions determine the way humans regard the robot, and they are deemed as essential for a natural form of communication. Addressing those issues is the focal point of this presentation, while as a test-bed we deployed a robot platform in a museum, where it serves as a guide to visitors. This work is performed in the context of the INDIGO R&D project (http://www.ics.forth.gr/indigo/). INDIGO pursued the development of human-robot communication technology for intelligent mobile robots that operate in populated environments. The project addressed this issue from two sides: by enabling robots to correctly perceive and understand natural human behavior and by making them act in ways that are familiar to humans. The results of INDIGO were demonstrated in a museum guide use-case.

Biography: Vangelis Karkaletsis has substantial experience in the field of Language and Knowledge Engineering, applied to content analysis, data fusion from multimedia content, ontology engineering, multilingual generation, and personalization. He has been involved in several national and international RTD projects. He was the coordinator of the DG-SANCO project MedIEQ, technical manager of the SIAP project QUATRO Plus, coordinator of the national project OntoSum, responsible for the textual content analysis and the ontology learning tasks in the FP6 project BOEMIE on multimedia information extraction, and managed for NCSR the FP6 project INDIGO on human-robot interaction. He is Research Director at NCSR "Demokritos" and head of the Software & Knowledge Engineering Lab. He has organised, or served on the committees of, many workshops and conferences, and was the local Chair of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL-09), held in Athens. He is currently the co-chair of the 6th Hellenic Conference on Artificial Intelligence (SETN-2010). He has served for many years on the board of the Greek Artificial Intelligence Society (EETN), the last two years (2006-2008) as a vice-chair. He is co-founder of the spin-off company 'i-sieve' technologies. He has published over 100 articles.

Talk 1: Audio-visual automatic speech recognition and related bimodal speech technologies: A review of the state-of-the-art and open problems;
Talk 2: Far-Field Multimodal Speech Processing and Conversational Interaction in Smart Spaces

Thursday, March 04, 2010
Gerasimos Potamianos


Abstract: Title 1: Audio-visual automatic speech recognition and related bimodal speech technologies: A review of the state-of-the-art and open problems

Abstract 1:
The presentation will provide an overview of the main research achievements and the state-of-the-art in the area of audio-visual speech processing, mainly focusing on audio-visual automatic speech recognition. The topic has been of interest in the speech research community due to the potential for increased robustness to acoustic noise that the visual modality holds. Nevertheless, significant challenges remain that have hindered practical applications of the technology - most notably, difficulties with visual speech information extraction and with audio-visual fusion algorithms that remain robust to the audio-visual environment variability inherent in practical, unconstrained interaction scenarios and audio-visual data sources, for example multi-party interaction in smart spaces, broadcast news, etc. These challenges are also shared across a number of interesting audio-visual speech technologies beyond the core speech recognition problem, where the visual modality has the potential to resolve ambiguity inherent in the audio signal alone; for example, speech enhancement, speech activity detection, speaker recognition, and others.

Title 2: Far-Field Multimodal Speech Processing and Conversational Interaction in Smart Spaces

Abstract 2:
Robust speech processing constitutes a crucial component in the development of usable and natural conversational interfaces. In this talk we are particularly interested in human-computer interaction taking place in "smart" spaces - equipped with a number of far-field, unobtrusive microphones and camera sensors. Their availability allows multi-sensory and multi-modal processing, thus improving robustness of speech-based perception technologies in a number of scenarios of interest, for example lectures and meetings held inside smart conference rooms, or interaction with domotic devices in smart homes. In this talk, we overview related work in developing state-of-the-art speech technology in smart spaces. In particular, we discuss acoustic scene analysis, speech activity detection, speaker diarization, and speech recognition, emphasizing multi-sensory or multi-modal processing. The resulting technology is envisaged to allow far-field conversational interaction in smart spaces based on dialog management and natural language understanding of user requests.

Biography: Gerasimos Potamianos received the Ph.D. degree in Electrical and Computer Engineering from the Johns Hopkins University, in Baltimore, Maryland in 1994. Since then, he has worked in the U.S. at the Center for Language and Speech Processing at Johns Hopkins, at AT&T Labs-Research, and at the IBM T.J. Watson Research Center, and in Greece at the Institute of Computer Science (ICS) at FORTH, Crete. He currently is a Research Director at the Institute of Informatics and Telecommunications at the National Centre for Scientific Research (NCSR) "Demokritos", in Athens, Greece. His research interests span the areas of multimodal speech processing with applications to human-computer interaction and ambient intelligence, with particular emphasis on audio-visual speech processing, automatic speech recognition, and multimedia signal processing and fusion. He has published over 90 articles in these areas and holds seven US Patents.

Architecting Robust Microprocessor in Light of Small-Scale Processing Technology
Wednesday, March 03, 2010
Xin Fu


Abstract: With the continuous down-scaling of CMOS processing technology, computer architects have built high-performance and low-power many/multi-core processors. On the other hand, as the processing technology pushes towards nano-scale, silicon reliability becomes one of the most important challenges in the design and fabrication of future microprocessors. The failure mechanisms could significantly degrade chip reliability and lifetime, and result in large economic losses. It is imperative to build a reliable processor while achieving an optimal trade-off among performance, reliability and power. In this talk, I will introduce several emerging failure mechanisms (e.g., soft error, negative bias temperature instability (NBTI), and process variation) and present three processor vulnerability-mitigation methodologies. The first methodology characterizes and mitigates the processor microarchitecture soft-error vulnerability in the presence of process variation. I will describe two techniques working at fine and coarse grain levels to efficiently improve the processor soft-error robustness. The second methodology observes the positive interplay between two failure mechanisms (NBTI and process variation) and intelligently leverages this interaction to tolerate their detrimental impact on reliability. The third methodology presented in this talk targets the hierarchical mitigation of NBTI and process variation effects on the network-on-chip, a crucial hardware component in future many/multi-core processors.

Biography: Dr. Fu is a Computing Innovation Fellow (supported by the Computing Community Consortium and the Computing Research Association, with funding from the National Science Foundation) at the Department of Computer Science, University of Illinois at Urbana-Champaign. She received her B.E. in Computer Science and Technology from Central South University, China in July 2003, and her Ph.D. in Computer Engineering from the University of Florida in August 2009. After her Ph.D., she was named as 2009 Computing Innovation Fellow and joined the SWAT (Software Anomaly Treatment) research project led by Professor Sarita Adve at Illinois. Her research interests include multi-core computer architecture, processor microarchitecture, reliability, nano-scale technology scaling, variability, and on-chip interconnection network. She is a member of the IEEE and the ACM. More details about her research are available at: http://rsim.cs.illinois.edu/~xfu/

TBA
Friday, February 26, 2010
Dimitrios Kosmopoulos


Abstract: This talk introduces some challenging problems in computer and robot vision. It includes (a) robust behavior understanding of humans in industrial environments using holistic features and multiple cameras, (b) tracking of moving targets under occlusions using a hierarchical approach, and (c) unloading of unstructured piles of objects using 3D vision. The research that will be presented has been performed in the framework of several EU and national projects.

Biography: Dimitrios Kosmopoulos received the B.Eng. degree in Electrical and Computer Engineering from the National Technical University of Athens in 1997 and the PhD degree from the same institution in 2002. He is currently a research scientist in the Computational Intelligence Laboratory of the Institute of Informatics and Telecommunications at the National Center for Scientific Research “Demokritos” in Athens, Greece. He is also an adjunct Assistant Professor at the University of Central Greece and at the Technical Educational Institute of Athens. His research interests include computer and robotic vision and pattern recognition.

Image-based Robot Control: the Multiple View Geometry Approach
Thursday, February 25, 2010
Gian-Luca Mariottini


Abstract: Autonomous robots have the potential to enormously impact society by improving the quality of life in a variety of ways, ranging from elderly care to robotics-aided surgery. In all of these fields, vision sensors are of great interest: cameras are less expensive than lasers and sonars, and can provide richer and non-contact measurements of the surroundings. However, vision is still not used as the main (or only) on-board sensor, but jointly with others (e.g., inertial measurement units, lasers, etc.), which often requires performing initial, off-line and time-consuming multi-sensor calibration procedures.

In this talk I will present my research in the field of autonomous navigation for a mobile robot equipped solely with an on-board camera (pinhole or omnidirectional). I will present a control strategy that uses multiple-view geometry features for the real-time control of the robot towards a goal configuration (specified exclusively by a target image). Our approach is able to avoid typical problems such as local minima or Jacobian singularities, which are typically encountered in standard image-based visual servoing control schemes. Another key advantage is that our strategy does not need any metrical knowledge of the 3-D observed scene. Moreover, and differently from state-of-the-art 2-1/2-D techniques, our algorithm does not need to estimate the relative pose between successive views (e.g., via homography matrix decomposition). This process, which would in fact require manual intervention, could also increase the sensitivity to image noise. Asymptotic convergence to the desired robot configuration has been proved, also in the case of unknown focal length. I will present experimental results together with a recent extension of this work to the case of navigating in large outdoor environments.

Biography: Dr. Gian Luca Mariottini received his Master's degree summa cum laude in Computer Science and the Ph.D. degree from the University of Siena, Siena, Italy. He has been a visiting scientist/postdoc at the University of Pennsylvania, Georgia Tech, and the University of Minnesota.

Semantic Based Substitution of Unsupported Access Points on Library Searching
Wednesday, February 24, 2010
Sarantos Kapidakis


Abstract: Meta-searching library communities involves access to sources where metadata are invisible behind query interfaces. Many of the query interfaces utilize predefined abstract Access Points for the implementation of the search services, without any further access to the underlying metadata and query methods. The unsupported Access Points and their consequences, either query failures or inconsistent answers, are the main issue when meta-searching this kind of system. We will present zSAPN (Z39.50 Semantic Access Point Network), a system which improves search consistency and decreases query failures by exploiting the semantic information of the Access Points from an RDFS description.
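The substitution idea can be sketched as a walk up a broader-term hierarchy: when a source does not support the requested Access Point, fall back to the nearest semantically broader one it does support. The hierarchy fragment and function below are hypothetical illustrations, not zSAPN's actual RDFS vocabulary.

```python
# Hypothetical fragment of an RDFS-style Access Point hierarchy:
# each access point maps to its broader (parent) access point.
BROADER = {
    "title": "any",
    "subtitle": "title",
    "author": "creator",
    "creator": "any",
}

def substitute(access_point, supported):
    """zSAPN-style substitution (sketch): if a query's access point is
    not supported by a source, walk up the semantic hierarchy until a
    supported, broader access point is found; None means the query
    cannot be mapped and would otherwise fail."""
    ap = access_point
    while ap is not None:
        if ap in supported:
            return ap
        ap = BROADER.get(ap)
    return None
```

Broadening a query in this way trades some precision for consistency: the substituted Access Point may return a superset of the intended results, but it avoids the outright query failure an unsupported Access Point would cause.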

Biography: Sarantos Kapidakis is Professor of the Department of Archives and Library Sciences at the Ionian University at Corfu, Greece, and director of the Laboratory on Digital Libraries and Electronic Publishing. He received his Ph.D. degree in Computer Science from Princeton University in 1990 and holds an MSc. from Princeton University and a Diploma in Electrical Engineering from the National Technical University of Athens. He has worked on, and received funding from, many European projects, collaborating with the National Documentation Centre in Greece, MIT, the University of Crete and the Foundation for Research and Technology of Greece. He has chaired numerous international committees in digital libraries and chaired the European Conference on Digital Libraries (ECDL). His current research interests include new methods for the representation and preservation of scientific, environmental and research data.

Sense-through-Foliage Target Detection and Channel Modeling: Where Science meets Art
Friday, February 19, 2010
Qilian Liang

Abstract: In this talk, sense-through-foliage target detection and channel modeling using ultra-wideband (UWB) radars will be presented. We will propose a Discrete-Cosine-Transform (DCT)-based approach for sense-through-foliage target detection using a single UWB radar when the echo signal quality is good, and a Radar Sensor Network (RSN) and DCT-based approach when the echo signal quality is poor. A RAKE structure which can combine the echoes from different cluster members will be proposed for the cluster head in the RSN. We will compare our approach with the ideal case in which both echoes are available, i.e., echoes with target and without target. We will also compare our approach against the scheme in which a 2-D image is created by adding voltages with the appropriate time offset, as well as against the matched filter-based approach. We observe that the matched filter-based approach does not work well because the UWB channel has memory. We will apply two approaches to sense-through-foliage channel modeling: the Saleh-Valenzuela (S-V) method for UWB channel modeling and the CLEAN method for narrowband and UWB channel modeling. Finally, we will demonstrate that, for large-scale fading using a path-loss and log-normal shadowing model in a foliage environment, the path-loss exponent is very high due to rich scattering.
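The large-scale fading model mentioned at the end of the abstract can be sketched as follows; the parameter values below (reference loss, exponent, shadowing spread) are illustrative assumptions, not figures from the talk:

```python
import math
import random

def path_loss_db(d, d0=1.0, pl_d0=40.0, n=4.0, sigma=4.0, rng=None):
    """Large-scale path loss (dB) with log-normal shadowing.

    PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma,
    where X_sigma ~ N(0, sigma^2) models shadowing in dB.
    A high path-loss exponent n (here 4.0, a placeholder) reflects
    the rich scattering reported for foliage environments.
    """
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    shadowing = rng.gauss(0.0, sigma)
    return pl_d0 + 10.0 * n * math.log10(d / d0) + shadowing
```

For example, with shadowing disabled (`sigma=0`), moving from 1 m to 10 m adds `10*n` dB of loss, which is how the exponent is read off measured data.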

Biography: Dr. Qilian Liang is an Associate Professor at the Department of Electrical Engineering, University of Texas at Arlington. He received his B.S. degree from Wuhan University in 1993, M.S. degree from Beijing University of Posts and Telecommunications in 1996, and Ph.D degree from University of Southern California (USC) in May 2000, all in Electrical Engineering. Prior to joining UTA in August 2002, he was a Member of Technical Staff in Hughes Network Systems Inc in San Diego, California. His research interests include wireless sensor networks, radar and sonar sensor networks, wireless communications, communication system and communication theory, signal processing for communications, fuzzy logic systems and applications. Dr. Liang has published more than 160 journal and conference papers and 7 book chapters. He received 2002 IEEE Transactions on Fuzzy Systems Outstanding Paper Award, 2003 U.S. Office of Naval Research (ONR) Young Investigator Award, 2005 UTA College of Engineering Outstanding Young Faculty Award, and 2007 and 2009 U.S. Air Force Summer Faculty Fellowship Program Award.

Connectivity and Security in Directional Multimedia Sensor Networks
Friday, February 19, 2010
Deepa Kundur

Abstract: Recently, there has been increased interest in the development of untethered sensor nodes that communicate directionally via directional radio frequency (RF) or free space optical (FSO) communications. Directional wireless sensor networks, such as the original Smart Dust proposal employing broad-beamed FSO communications, have the potential to provide gigabit-per-second speeds at relatively low power consumption, suitable for multimedia sensing systems. Two significant challenges shared by this class of directional networks are connectivity and routing security, especially for random deployments. In this talk we study the feasibility of employing directional communication paradigms in large-scale, security-aware, broadband, randomly and rapidly deployed static multimedia sensor networks. We investigate the implications of link directionality for network connectivity and secure ad hoc multihop routing, and highlight network design approaches that mitigate the compromise between the two.
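The connectivity implication of link directionality can be illustrated with a toy model (not the talk's formulation): each node has a heading and a beam of fixed angular width, and a link exists only when the receiver falls inside the transmitter's beam, so links are inherently one-way.

```python
import math

def directional_links(nodes, radius, beamwidth):
    """Directed link set for a directional sensor network sketch.

    Each node is (x, y, heading).  Node i can transmit to node j iff
    j is within `radius` and the bearing from i to j falls inside i's
    beam of angular width `beamwidth` centered on its heading.
    Note the asymmetry: i -> j does not imply j -> i, which is what
    makes connectivity analysis of random deployments hard.
    """
    links = set()
    for i, (xi, yi, hi) in enumerate(nodes):
        for j, (xj, yj, _) in enumerate(nodes):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            if math.hypot(dx, dy) > radius:
                continue
            bearing = math.atan2(dy, dx)
            # smallest angular difference between bearing and heading
            diff = abs((bearing - hi + math.pi) % (2 * math.pi) - math.pi)
            if diff <= beamwidth / 2:
                links.add((i, j))
    return links
```

With two nodes facing the same direction, the front node hears the rear node but not vice versa, so the resulting graph must be analyzed as a digraph rather than an undirected geometric graph.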

Biography: Deepa Kundur is an Associate Professor in the Department of Electrical and Computer Engineering at Texas A&M University. A native of Toronto, Canada, she received the B.A.Sc., M.A.Sc. and Ph.D degrees all in Electrical and Computer Engineering from the University of Toronto in 1993, 1995 and 1999, respectively. Her research interests include cyber security of the electric smart grid, connectivity and security of directional link networks, security and privacy of sensor and social networks, information forensics, and multimedia security.

Risk Models with Extremal Subexponentiality
Thursday, February 11, 2010
Dimitrios G. Konstantinides

Abstract: Risk models that cannot provide help for solving the problems of the insurance business cannot be characterized as adequate. In this paper a parametric aspect of the subexponential class of distributions with at least two parameters is considered. In connection with the classical risk models with the corresponding claim size distributions, the relation between the parameters is chosen in such a way that the safety loading remains fixed. Considering a proper convergence of the parameters, such that the tail of the claim size distribution becomes heavier, we explore the situation of extreme heavy-tailedness and extreme light-tailedness in the presence of subexponentiality. We concentrate our attention on the corresponding behavior of the ruin probability in the classical risk model. We choose a proper convergence of a parameter that makes the tail of the claim size distribution heavier or lighter, and then pass to the limit. Finally, we proceed to an appropriate functional normalization in order to preserve the distributional properties.
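For reference, the classical risk model and ruin probability discussed in the abstract are usually written as follows; this is the standard Cramér-Lundberg formulation, not notation taken from the paper itself:

```latex
U(t) = u + ct - \sum_{i=1}^{N(t)} X_i ,
\qquad
\psi(u) = \Pr\!\Big( \inf_{t \ge 0} U(t) < 0 \Big),
```

where $u$ is the initial capital, $c$ the premium rate, $N(t)$ a Poisson claim-arrival process with rate $\lambda$, and the $X_i$ are i.i.d. claim sizes with mean $\mu$. The safety loading held fixed in the abstract is $\rho = c/(\lambda\mu) - 1$.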

Biography: Dr. Dimitrios G. Konstantinides is Associate Professor in the Department of Statistics and Actuarial - Financial Mathematics at the University of the Aegean. He received his Ph.D. in Physics and Mathematics from Moscow State University in 1990. Between 1992-1998, he worked as adjunct Professor at the Technical University of Crete, where he taught in the Department of Electronic and Computer Engineering and the Department of Production Engineering and Management. In 1998-2001, he worked as adjunct Professor in the Department of Mathematics at the University of the Aegean. Since 2001 he has worked in the Department of Statistics and Actuarial - Financial Mathematics at the University of the Aegean. He is the author of 4 textbooks and 17 peer-reviewed research publications. He is a member of several journal editorial boards and organizer of the conference on Actuarial Science and Finance on Samos.

The Probabilities of Absolute Ruin in the Renewal Risk Model with Constant Force of Interest
Wednesday, February 10, 2010
Dimitrios G. Konstantinides

Abstract: In this presentation we consider the probabilities of finite- and infinite-time absolute ruin in the renewal risk model with constant premium rate and constant force of interest. In the particular case of compound Poisson model, explicit asymptotic expressions for the finite- and infinite-time absolute ruin probabilities are given. For the general renewal risk model, we present an asymptotic expression for the infinite-time absolute ruin probability. Conditional distributions of Poisson processes and probabilistic techniques regarding randomly weighted sums are employed in the course of this study.

Biography: Dr. Dimitrios G. Konstantinides is Associate Professor in the Department of Statistics and Actuarial - Financial Mathematics at the University of the Aegean. He received his Ph.D. in Physics and Mathematics from Moscow State University in 1990. Between 1992-1998, he worked as adjunct Professor at the Technical University of Crete, where he taught in the Department of Electronic and Computer Engineering and the Department of Production Engineering and Management. In 1998-2001, he worked as adjunct Professor in the Department of Mathematics at the University of the Aegean. Since 2001 he has worked in the Department of Statistics and Actuarial - Financial Mathematics at the University of the Aegean. He is the author of 4 textbooks and 17 peer-reviewed research publications. He is a member of several journal editorial boards and organizer of the conference on Actuarial Science and Finance on Samos.

Privacy Disclosure in Online Social Networks: Are You Worried?
Friday, February 05, 2010
Na Li

Abstract: Many online social networks (e.g., Facebook or LinkedIn) regularly publish their topology for research and advertising purposes. Such a publishing process, however, may incur significant privacy concerns over sensitive information that many users may not be willing to disclose. This seminar will first present a brief yet systematic review of the existing work on anonymization techniques against the violation of privacy when republishing social network data. Next, a challenging problem regarding link privacy will be defined, followed by novel solutions along with theoretical and experimental analysis.

Biography: Ms. Na Li is a PhD student in the Department of Computer Science and Engineering at the University of Texas at Arlington. She is also a member of the Center for Research in Wireless Mobility and Networking (CReWMaN). She received her B.S. degree in Computer Science from Nankai University, Tianjin, China. Before joining CReWMaN, she worked as a research assistant at the Computer Network Information Center, Chinese Academy of Sciences, Beijing, China. Her current research interests include privacy preservation in challenging networks, such as republishing online social network data and protecting location privacy in wireless sensor networks.

Networking for Life
Wednesday, February 03, 2010
John Humphrey

Abstract: Is networking critical to your business development success?
Do you have a strategy for leveraging your network to accelerate your business growth?
Can you track which business development activities and initiatives produce sales?

The companies that succeed in today's complex selling environment employ individuals who harness the power of networking. It's not necessarily what you know, it's who you know and who knows you. Networking for Life is not a new idea. Like most ideas, the benefits are found in the execution of the idea.

In this briefing, we will discuss how to unlock the value of lifelong business relationships by employing a successful networking strategy. We will see how one company has successfully leveraged Microsoft CRM, a customized networking management tool and a sales information portal to shorten cycles associated with growing their business.

John Humphrey, co-founder of Pariveda Solutions, will discuss how his company has doubled revenues for each of the past three years by employing the Networking for Life approach combined with technology to implement and manage the strategy.

The Benefits
* Gain an understanding of Networking for Life and what it can do for you and your business
* Understand the business case for implementing this type of solution
* Understand how to use Microsoft's CRM product along with business intelligence and mobile solutions to attain real business value with a strategy such as Networking for Life
* Network with your peers and like-minded individuals

Biography: Overview
As Co-Founder and Chairman of the Board of Pariveda Solutions, Inc., John has contributed to the success of the company through driving operational efficiencies and sales effectiveness. Since the company does not employ a direct sales force, a large percentage of John's time is spent coaching and teaching basic networking and sales techniques. His "Networking for Life" topic is sought after by organizations throughout the U.S. This topic, often referred to as "Unlocking the Value of Life Long Relationships" has been presented at various Chamber meetings, private company meetings and college campuses.

Experience
Mr. John Humphrey has over twenty years of experience in business operations and technology, spanning sales strategy, marketing and application software. Mr. Humphrey's primary areas of expertise are enterprise applications, sales effectiveness and software procurement methods. He has authored several courses and workshops helping services and software companies improve their methodologies and sales effectiveness. Mr. Humphrey has also applied these skills in large IT enterprises to assist them in selling value inside their own organizations. He has an extensive background in both selling and implementing Enterprise Resource Planning (ERP) systems with companies like Lawson and Oracle, and his experience with Ariba gave him deep experience in the procurement arena. While at Tactica Technology Group, he built an Oracle implementation practice that was profitable within the first year of operation. At Andersen Consulting, John's expertise was in loan origination and credit analysis in commercial banking.

Mr. Humphrey has created a software selection methodology that goes beyond traditional methods and assists our clients in the negotiation and details of both pricing and contracts. He has authored several white papers titled "Sales and the Art of War" related to software selection and negotiation.
With his extensive background in both implementing and selling enterprise-wide solutions, Mr. Humphrey applies these skills to assist our clients in making the best possible technology choices and then ensures they get an economically viable solution.

He continues to support several distribution companies with their technology planning and IT support. He has contributed to the success of his clients by solving complex problems and delivering leading business and technical solutions. Mr. Humphrey's clients include middle market and Fortune 500 companies, particularly distributors, service companies and financial services companies.

Out of the Office
John is an avid roller blader and can be found in River Legacy Park pounding out the miles. He loves boating, water skiing and tubing (at least pulling them) and he seeks out the ocean at every possible opportunity. John has been married to Laurel for over 20 years and has two sons who are actively engaged in athletics.

Education and Service
John is on the Board of the IT Round Table, a national networking organization, and serves on the Board at Pantego Christian Academy, a K-12 college preparatory school in South Arlington. More and more, he is found speaking to organizations about relationship building and using newer Web 2.0 technologies to keep in touch with life-long friends. Mr. Humphrey received a B.A. in Economics and a B.B.A. in Finance from Southern Methodist University in 1984. Mr. Humphrey also received an M.B.A. from The Cox School of Business at Southern Methodist University in 1990.

Smooth Migration to Unified All-IP Networks
Friday, January 29, 2010
Yixin (James) Zhu

Abstract: Today's service providers can have a diverse mix of wireless and wireline core and access voice networks based on IP and TDM technologies. Migrating these networks to efficiently deliver the increasingly content- and video-centric services that end-users want across all access types and devices is a key challenge. The industry is converging on the idea of an all-IP common service core network, but how do we get there? A flexible switching and services platform can significantly help to resolve this challenge.
This talk will show how a common, secure platform enables the Universal Connectivity of voice, data, and video communications across all networks, including answers to the following questions:

* How can TDM core and access networks cost-effectively migrate to IP by using a convergence platform?
* What are some of the key steps in network migration and what are common capabilities required for these steps?
* How can important new services like Video Telephony and Multimedia Content delivery be enabled as part of a smooth network migration?
* How can key technologies like Multimedia Transcoding, Encryption, and Femtocells make the migration efficient and fast while generating new revenue opportunities along the path to an ultimate all-IP solution?

Biography: Dr. Yixin (James) Zhu has spent over 15 years in the telecom industry, with diverse experience in systems engineering, technical marketing, product management, business development, partnership management and sales at leading telecommunication companies such as Nortel Networks, Qualcomm Inc. and Tekelec Inc., and at start-ups such as Santera Systems and Genband Inc. Currently, he is the VP of Sales for China at Genband Inc., responsible for all sales activities (channel, reseller and direct) in China.

Prior to entering the telecom industry, Dr. Zhu was an assistant professor in the Department of Industrial Engineering at SUNY Buffalo from 1990-1994, and a visiting assistant professor at George Washington University from 1989-1990. Dr. Zhu has published more than 20 papers in leading academic journals and holds 3 patents.

Dr. Zhu received his B.S. degree in Mathematics from Fudan University in 1982, and his M.S. and Ph.D. degrees in Operations Research from Cornell University in 1987 and 1989, respectively.

Computational Thinking
Wednesday, January 20, 2010
Jeannette Wing

Abstract: My vision for the 21st Century: Computational thinking will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, we should add computational thinking to every child's analytical ability. Computational thinking involves solving problems, designing systems, and understanding human behavior by drawing on the concepts fundamental to computer science. Thinking like a computer scientist means more than being able to program a computer. It requires the ability to abstract and thus to think at multiple levels of abstraction. In this talk I will give many examples of computational thinking, argue that it has already influenced other disciplines, and promote the idea that teaching computational thinking can not only inspire future generations to enter the field of computer science but benefit people in all fields.

Biography: Dr. Jeannette M. Wing is the President's Professor of Computer Science in the Computer Science Department at Carnegie Mellon University. She received her S.B., S.M., and Ph.D. degrees in Computer Science, all from the Massachusetts Institute of Technology. From 2004-2007, she was Head of the Computer Science Department at Carnegie Mellon. Currently on leave from CMU, she is the Assistant Director of the Computer and Information Science and Engineering Directorate at the National Science Foundation. Professor Wing's general research interests are in the areas of specification and verification, concurrent and distributed systems, programming languages, and software engineering. Her current focus is on the foundations of trustworthy computing. Professor Wing has been or is on the editorial board of twelve journals. She has been a member of many national and industrial advisory boards. She is a member of AAAS, ACM, IEEE, Sigma Xi, Phi Beta Kappa, Tau Beta Pi, and Eta Kappa Nu. Professor Wing is an AAAS Fellow, ACM Fellow, and IEEE Fellow.

Linear-time matching of Position Weight Matrices
Tuesday, November 24, 2009
Nikola Stojanovic

Abstract: Position Weight Matrices (PWMs) are a popular way of representing variable motifs in genomic sequences. Virtually every currently used database containing protein binding and DNA sequence signal information stores this information in the form of PWMs, either exclusively or in combination with other forms of representation. Consequently, PWM matching has become the principal mechanism for mining the information contained in these databases. While not inefficient on shorter sequences, the current implementations of PWM matching are too expensive for whole-genome searches, which is now a performance bottleneck in today's genomics.

After an introduction to PWMs and their applications, in this talk we shall present an algorithm we have developed for their efficient matching in long target sequences. After the initial pre-processing of the matrix, our method runs in time linear in the size of the genomic segment, which makes it suitable for application to entire chromosomes, and even complete genomes.
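For readers unfamiliar with PWM matching, the baseline computation looks like the following sketch. This is the naive window-scoring scan, which runs in O(length of sequence × width of matrix); it is not the talk's linear-time algorithm, whose preprocessing removes the dependence on matrix width. The toy matrix and threshold are illustrative:

```python
def pwm_scan(seq, pwm, threshold):
    """Naive PWM matching: score every window of length len(pwm).

    `pwm` is a list of dicts, one per motif position, mapping each
    nucleotide to a (typically log-odds) weight.  A window matches
    when the sum of its per-position weights meets the threshold.
    Returns the start positions of all matching windows.
    """
    m = len(pwm)
    hits = []
    for i in range(len(seq) - m + 1):
        score = sum(pwm[j][seq[i + j]] for j in range(m))
        if score >= threshold:
            hits.append(i)
    return hits
```

For example, a two-column matrix rewarding "A" then "C" reports every occurrence of "AC" in the target sequence when the threshold equals the maximum attainable score.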

Biography: Nikola Stojanovic is an Assistant Professor in the Computer Science and Engineering department at UTA. He received his B.S. degree in Mathematics from the University of Belgrade, Yugoslavia, and his Ph.D. in Computer Science and Engineering from Pennsylvania State University.

Engineering and Science in Five Dimensions
Wednesday, November 18, 2009
Chris Greer

Abstract: The digital dimension consists of network connectivity that can lower conventional barriers to participation and interaction of time and place; computational capacity and capability to expand the possible and extend the conceivable; and information discovery, integration, and analysis capabilities to drive innovation. The emergence and continuing evolution of this powerful new dimension is reshaping science, just as it is recasting business, government, education, and many other aspects of human activity worldwide. To lead in the emerging global digital information society, the nation must fully embrace the digital dimension - expanding access, extending capabilities, and building on the potential of this exciting new environment.

Biography: Dr. Chris Greer is the assistant director of Information Technology R&D in the White House Office of Science and Technology Policy. Dr. Greer received his Ph.D. in biochemistry from the University of California, Berkeley and did his postdoctoral work at CalTech. Dr. Greer was a member of the faculty at the University of California at Irvine in the Department of Biological Chemistry for approximately 18 years where his research on gene expression pathways was supported by grants from the NSF, NIH, and the American Heart Association. During that time, he was founding Executive Officer of the RNA Society, an international professional organization with more than 700 members from 21 countries worldwide.

NOTE: RSVP by November 11 to 817.272.0074

Pay-as-you-drive applications: privacy implications and possible solutions
Tuesday, November 17, 2009
Carmela Troncoso

Abstract: Pay-as-you-drive (PAYD) applications are becoming part of our daily lives as an increasing number of car insurance companies are offering PAYD discount programs to their customers and governments start to consider PAYD as the future for tax collection. The European Electronic Toll Service, a toll system in which citizens pay taxes depending on how much they use the roads, will become a reality within five years as required by the European Union. In this talk we will describe Pay-as-you-drive services and consider their privacy implications and the legal issues that arise from their deployment in the framework of the European Data Protection Directive. Then we will discuss possible privacy-friendly implementations for PAYD, including our own proposed PriPAYD system, and their respective pros and cons.

Biography: Carmela Troncoso earned an M.Sc. degree in Telecommunications Engineering in 2006 from the University of Vigo, Spain. She is currently a Ph.D. student and researcher in the COSIC group in the Department of Electrical Engineering (ESAT) at K.U. Leuven, Belgium. Her research focuses on Privacy Enhancing Technologies and, in particular, anonymous communications and location privacy.

Network Optimization in Wireless Core Networks
Friday, November 13, 2009
Jing Wang

Abstract: The network optimization problems in wireless core networks are motivated by telecommunication operators' needs to achieve wider coverage, greater network capacity and better quality of service for their customers. As core networks evolve from traditional voice-based services to data-based services, the optimization problems become even more complicated, considering mixed traffic types and the corresponding equipment needed to support them.

In this talk, we will first discuss the background of the emerging wireless core networks along with the problem models for network optimization. After that, we will investigate several typical cases of network optimization in wireless core networks, including capital planning and border optimization. Then, we will focus on the optimal solutions to the previous problem models. Finally, the performance evaluation metrics will be discussed accordingly.

Biography: Ms. Jing Wang joined UTA in 2006 and is currently pursuing a Ph.D. degree in the Department of Computer Science and Engineering. She obtained her B.S. and M.S. degrees in electrical engineering from Xi'an Jiaotong University, China, in 1998 and 2001, respectively. Her research interests are wireless sensor networks and pervasive computing. She is now working on network optimization projects funded by Cerion Inc.

Reliable Data Collection in Wireless Sensor Networks with Mobile Elements: Research Challenges
Friday, November 06, 2009
Mario Di Francesco

Abstract: Wireless sensor networks (WSNs) have emerged as an effective solution for a wide range of applications. The traditional WSN architecture consists of static nodes which are densely deployed over a sensing area. More recently, WSN architectures exploiting mobile elements (MEs) have been proposed. They take advantage of sensor node mobility to address the problem of data collection in WSNs. To this end, new solution approaches to various networking problems are needed.

In this talk we will first define WSNs with MEs and provide a taxonomy of various architectures proposed in the literature. Then we will identify the main issues and challenges related to WSNs with MEs. Furthermore, we will focus on approaches for energy-efficient and reliable data collection in such scenarios. We will conclude the talk by considering representative case studies for both dense and sparse WSNs.

Biography: Dr. Mario Di Francesco is a Research Associate in the Department of Computer Science and Engineering at the University of Texas at Arlington. He is also a member of the Center for Research in Wireless Mobility and Networking (CReWMaN). He received his PhD from the Department of Information Engineering at the University of Pisa (Italy) in May 2009. He was a visiting scholar at CReWMaN during Fall 2008, and a research fellow at the Real Time Systems Lab (RETIS) of the Scuola Superiore S. Anna in Pisa (Italy). His research interests include performance evaluation and design of adaptive algorithms for wireless sensor networks.

Hybrid Computational and Experimental Approaches to Signaling Regulation at Many Scales
Wednesday, November 04, 2009
Marc Turcotte

Abstract: More and more, following successes in other sciences, a theoretically-based approach has begun to define a role in biology. Thus, far beyond the application of ever more sophisticated analysis techniques to understand data, systems biology begins to offer a uniquely powerful global model-based strategy to understand overwhelmingly unintuitive aspects of biological phenomena that are rooted in the inescapable facts of the nonlinearity and stochasticity of many complex biological systems. In this talk, I will show how using a mathematically grounded computational approach anchored in experimental data - a hybrid approach - leads to otherwise unattainable insight into detailed signaling regulation, at multiple resolution scales. I will show examples from detailed G protein signaling regulation, from the subtle intricacies of signaling pathway interactions in mammalian cells, from the signaling underlying cancer onset and maintenance, and from stochastic regulation of gene expression in bacteria underlying the phenomenon of competence. I will discuss this last topic in more detail, describing a new ongoing project that relies on a math-based computational approach to recapitulate evolutionary choices in archetypical gene regulation network topologies. I will discuss how this is expected to feed back into the laboratory.

Biography: Dr. Turcotte holds a PhD in physics from McGill University. He originally contributed extensively to research in particle physics, before permanently switching his research focus to biology. For the last five years, as a recipient of the K25 NIH Career Transition Award and a faculty member in the Pharmacology Department at UT Southwestern, Dr. Turcotte has collaborated with a number of experimenters in Pharmacology and also in the Green Center for Systems Biology at UTSW. His focus has been modeling, simulation and analysis of cell signaling processes in mammalian cells and reconstituted systems, and stochastic gene regulation in bacteria. His approach is based on the use of applied mathematics blended with computer science, in a hybrid computational/simulation and experimental strategy, to advance the understanding of key biological processes, primarily in signaling and regulation. These processes are governed by nonlinear dynamics and are too unintuitive to understand without the guidance of a theoretical framework. In addition to cell signaling, Dr. Turcotte's focus includes using computational simulations of the nonlinear dynamics of excitable stochastic gene regulation circuits to investigate the role of stochastics in evolutionary selection. This is the focus of his recently funded NSF project. Dr. Turcotte's blend of theory and computation to guide experimentation is typical of the new breed of systems biologists trained in the quantitative and analytic sciences, who increasingly rely on theory supported by massive computations not only to provide new answers, but to suggest new questions, and thus increasingly shift the focus in biology from determining the "how" to determining the "why".

Providing Voice Connectivity using WiMAX from the Perspective of Emerging Markets: Issues and Solutions
Friday, October 30, 2009
Mayank Raj

Abstract: Connectivity is vital for the socio-economic growth of any country, particularly for developing nations. In emerging telecom markets like India and China, WiMAX is being considered as a broadband access solution ahead of LTE and other competing technologies due to its long-range communication and high bandwidth. We will discuss a kiosk-based WiMAX infrastructure model to provide voice connectivity to rural Indian villages. The novelty of this infrastructure model lies in its low deployment cost from the service provider's perspective, and almost negligible equipment cost for the end user. In order to make the kiosk-based model sustainable, we will discuss novel architectural solutions for energy efficiency and capacity enhancement for WiMAX-enabled devices. Experimental results will also be presented to validate the proposed concepts.

Biography: Mayank Raj is a PhD student in the Department of Computer Science and Engineering at the University of Texas at Arlington. He is also a member of the Center for Research in Wireless Mobility and Networking (CReWMaN). He received his M.Tech. degree in Information Technology from IIIT-Bangalore. Prior to joining CReWMaN, he held research positions at Motorola India Research Lab, the Applied Research Group (Satyam) and IIIT-Bangalore. His current research interests include broadband wireless networks, next generation mobile networks, power-line communication, network modeling and analysis.

A Game Theoretic Framework for Cognitive Radio Networks
Friday, October 23, 2009
Vanessa Gardellin

Abstract: In recent years there has been a huge proliferation of wireless applications and services which operate in the unlicensed spectrum band, resulting in the so-called spectrum overcrowding. In contrast, a careful analysis conducted by the Federal Communications Commission (FCC) shows that most of the licensed bands are surprisingly underutilized. Cognitive Radio (CR) has emerged as the key enabling technology to address the spectrum shortage problem. In fact, CRs have the ability to sense the external environment, learn from history, and make intelligent decisions to adjust their transmission parameters and create opportunities for a more aggressive spectrum reuse.

In this talk we will present a novel game-theoretic framework that takes advantage of the new IEEE 802.22 standard to guarantee self-coexistence among Wireless Regional Area Networks (WRANs). We will address self-coexistence as a channel assignment problem in which each WRAN acquires a channel in a dynamic and distributed way. We formulate the channel assignment problem as a multi-player non-cooperative repeated game and demonstrate that this potential game converges to a Nash equilibrium point. Experimental results will also be presented.
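The convergence argument can be illustrated in a few lines of code. The sketch below is not the speaker's algorithm; it shows the generic potential-game mechanism with an invented topology: each WRAN repeatedly best-responds by choosing the channel with the fewest interfering neighbors, the total number of conflicting links acts as a potential function that every improving move strictly decreases, and the dynamics therefore settle at a pure Nash equilibrium.

```python
import random

def best_response_channel_assignment(neighbors, num_channels, seed=0):
    """Iterated best-response dynamics for a channel-assignment game.

    neighbors: dict mapping each WRAN id to the set of WRANs it interferes
    with.  Each player's cost is the number of neighbors on its channel;
    the count of conflicting links is a potential function, so every
    improving move lowers it and the loop must terminate at a Nash
    equilibrium (no player can unilaterally reduce its conflicts)."""
    rng = random.Random(seed)
    channel = {w: rng.randrange(num_channels) for w in neighbors}
    changed = True
    while changed:
        changed = False
        for w in neighbors:
            costs = [sum(1 for v in neighbors[w] if channel[v] == c)
                     for c in range(num_channels)]
            best = min(range(num_channels), key=costs.__getitem__)
            if costs[best] < costs[channel[w]]:
                channel[w] = best          # strictly improving move
                changed = True
    return channel

# Four mutually interfering WRANs, three channels: by pigeonhole at least
# one pair must share a channel, and equilibrium achieves exactly one.
topo = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
assignment = best_response_channel_assignment(topo, num_channels=3)
conflicts = sum(assignment[u] == assignment[v]
                for u in topo for v in topo[u] if u < v)
print(assignment, "conflicting pairs:", conflicts)
```

On this fully connected example every equilibrium leaves exactly one pair sharing a channel, the best achievable outcome.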

Biography: Ms. Vanessa Gardellin is a Ph.D. Student in the Department of Information Engineering, University of Pisa, Italy. She received her Master's Degree from the same Department in 2007. Since January 2009, she has been a Visiting Researcher at CReWMaN Lab at the University of Texas at Arlington. Her research interests include channel assignment in wireless mesh networks, cognitive radios, game theory, network simulation and performance evaluation.

Similarity Measures and Indexing Methods for Multimedia Databases
Wednesday, October 21, 2009
Vassilis Athitsos

Abstract: Similarity-based retrieval is the task of identifying database patterns that are the most similar to a query pattern. Retrieving similar patterns is a necessary component of many practical applications, in fields as diverse as computer vision, bioinformatics, and speech/audio processing. This talk presents three methods that we have recently introduced for improving retrieval accuracy and/or efficiency in multimedia databases.

The first method, called reference-based subsequence matching, is used to find optimal subsequence matches in databases of strings under the edit distance or the Smith-Waterman similarity measure, as well as in databases of time series under the dynamic time warping distance measure. The second method is useful for efficient retrieval of database vectors that maximize the dot product with a query vector, and is applied to speed up classification in domains with a very large number of classes. The third method is a novel similarity measure for gestures, called Dynamic Space-Time Warping (DSTW). DSTW is explicitly designed for gesture recognition in complex scenes, where users are not constrained to wear specific clothes, and where the background can contain multiple other people or moving objects.
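For reference, the classical dynamic time warping distance mentioned above can be computed with a short dynamic program. This is the textbook DTW recurrence, not DSTW or the speaker's subsequence-matching method:

```python
from math import inf

def dtw(a, b):
    """Dynamic time warping distance between two numeric sequences.
    Classic O(len(a) * len(b)) dynamic program with |x - y| as the
    local cost; warping lets one element match several in the other."""
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # step both
    return D[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0: warping absorbs the repeated 2
```

A repeated sample costs nothing under DTW, which is exactly why it outperforms Euclidean distance on time series that vary in speed.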

Experimental results illustrate the advantages of these methods in several application domains, including similarity search in DNA databases, face recognition, and sign language recognition.

Biography: Dr. Vassilis Athitsos received the BS degree in mathematics from the University of Chicago in 1995, the MS degree in computer science from the University of Chicago in 1997, and the PhD degree in computer science from Boston University in 2006. In 2005-2006 he worked as a researcher at Siemens Corporate Research, developing methods for database-guided medical image analysis. In 2006-2007 he was a postdoctoral research associate in the Computer Science department at Boston University. Since August 2007 he has been an assistant professor in the Computer Science and Engineering department at the University of Texas at Arlington. His research interests include computer vision, machine learning, and data mining. His recent work has focused on efficient similarity-based retrieval, gesture and sign language recognition, shape modeling and detection, subsequence matching for strings and time series, and efficient classification of a large number of classes.

A Novel Localization Protocol for Wireless Sensor and Actor Networks
Friday, October 16, 2009
Giacomo Ghidini

Abstract: We consider a wireless sensor and actor network (WSAN) consisting of a large number of tiny, low-cost sensors uniformly and independently distributed in a two-dimensional geographical region around a few powerful entities, called actors. To save energy, the sensors operate according to sleep/awake schedules in an asynchronous manner. In this setting, we propose a novel semi-distributed, actor-centric localization algorithm which organizes the sensors in the vicinity of each actor by means of a discrete polar coordinate system. Specifically, each sensor is localized when it acquires the corona and sector coordinates of the region it resides in. To accomplish the localization task, each actor first trains a subset of sensors in its vicinity, which in turn train their neighbors. By modeling the deployed sensors as a two-dimensional Poisson point process and applying well-known results from the Coupon Collector's problem and Chernoff bounds, we derive bounds on the sensor density required to localize with high probability all sensors in the actor's vicinity. Finally, we verify the analytical bounds with results from our simulation experiments.
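The density bound can be illustrated numerically. The sketch below uses invented parameters, not the talk's analysis: sensor counts in an actor's vicinity are drawn from a Poisson point process, each sensor lands in a uniformly random sector, and we estimate the probability that every sector is covered. The coupon-collector effect shows up as a sharp transition around k ln k sensors for k sectors.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's method; adequate for the moderate means used here."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def coverage_probability(density, area, num_sectors, trials=2000, seed=1):
    """Sensors form a Poisson point process, so the number falling in the
    actor's vicinity is Poisson(density * area); each lands in a uniformly
    random sector.  Estimate P(every sector holds at least one sensor)."""
    rng = random.Random(seed)
    lam = density * area
    ok = 0
    for _ in range(trials):
        n = poisson_sample(lam, rng)
        hit = {rng.randrange(num_sectors) for _ in range(n)}
        ok += (len(hit) == num_sectors)
    return ok / trials

k = 16                        # sectors per actor (illustrative choice)
need = k * math.log(k)        # coupon-collector expectation: ~44 sensors
print(coverage_probability(density=3.0 * need, area=1.0, num_sectors=k))
print(coverage_probability(density=0.5 * need, area=1.0, num_sectors=k))
```

At three times the coupon-collector threshold, coverage of all sectors is nearly certain; at half the threshold it fails most of the time, which is the "with high probability" regime the derived bounds capture.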

Biography: Giacomo Ghidini is a PhD student in the Department of Computer Science and Engineering at the University of Texas at Arlington. He is a member of the Center for Research in Wireless Mobility and Networking (CReWMaN). Giacomo received his B. Comp. Eng. and M. Comp. Eng. degrees from the University of Bologna, Italy. He worked on his master's thesis during a 6-month visit at CReWMaN on a scholarship from the College of Engineering of the University of Bologna. His current research interests include the design and analysis of algorithms, architectures, protocols and middleware for wireless sensor networks.

Manifold Learning Based Feature Extraction Methods and Their Applications
Friday, October 09, 2009
De-Shuang Huang

Abstract: Manifold learning is an efficient dimensionality reduction method for nonlinearly distributed data, and it has been widely applied in many fields such as data visualization, image processing, information indexing and pattern recognition. However, these methods still face several problems, such as high demands on data sampling, the selection of the nearest neighbors, robustness to noise and outliers, and the estimation of inherent features. In this talk, two new manifold learning based methods, locally linear discriminant embedding (LLDE) and constrained maximum variance mapping (CMVM), are presented; both introduce class information to supervise the feature extraction process. Finally, a generalized Fisher framework (GFF) is explored to unify the feature extraction methods mentioned above; the traditional linear algorithms such as LDA and PCA, as well as some manifold learning approaches, can all be regarded as its special cases. All the proposed algorithms have been tested on artificial data and benchmark data sets, and the experiments have validated their efficiency.
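To make the "special case" remark concrete, here is the simplest member of that family: the two-class Fisher discriminant (classic LDA), which finds the single projection direction maximizing between-class over within-class scatter. The data below are synthetic; this is an illustration, not the LLDE/CMVM methods themselves.

```python
import numpy as np

def fisher_lda_direction(X, y):
    """Two-class Fisher discriminant: w proportional to Sw^{-1}(mu1 - mu0),
    where Sw is the pooled within-class scatter matrix.  Returns the unit
    projection direction separating the two classes."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w / np.linalg.norm(w)

# Two isotropic Gaussian classes separated along the x-axis.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 1.0, (100, 2)),
               rng.normal([4, 0], 1.0, (100, 2))])
y = np.repeat([0, 1], 100)
w = fisher_lda_direction(X, y)
print(w)  # close to the class-separating axis [1, 0]
```

Supervised embeddings such as LLDE generalize exactly this idea: they keep the class-separating objective but replace the global linear projection with locally linear structure.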

Biography: De-Shuang Huang received the Ph.D. degree from the Institute of Electronic Engineering, Xidian University, Xi'an, China, in 1993. From 1993 to 1997 he was a postdoctoral research fellow, first at the Beijing Institute of Technology and then at the National Key Laboratory of Pattern Recognition, Chinese Academy of Sciences, Beijing, China. In September 2000, he joined the Institute of Intelligent Machines, Chinese Academy of Sciences, as a recipient of the “Hundred Talents Program of CAS”. From September 2000 to March 2001, he worked as a Research Associate at Hong Kong Polytechnic University. From April 2002 to June 2003, he worked as a Research Fellow at City University of Hong Kong. From August to September 2003, he visited George Washington University, Washington DC, USA, as a visiting professor. From October to December 2003, he worked as a Research Fellow at Hong Kong Polytechnic University. From July to December 2004, he worked as a University Fellow at Hong Kong Baptist University. From March 2005 to March 2006, he worked as a Research Fellow at the Chinese University of Hong Kong. From March 20 to July 20, 2006, he worked as a visiting professor at Queen's University Belfast, UK. From October 26 to November 26, 2007, from November 2 to December 2, 2008, and from June 29 to July 29, 2009, he visited Inha University, Korea, as a visiting professor.

Dr. Huang is currently a senior member of the IEEE and an associate editor of several mainstream international journals. He was the General Chairman or Steering Committee Chairman of the International Conference on Intelligent Computing in 2005, 2006, 2007, 2008 and 2009, and Program Chairman of several Chinese national conferences. He has published over 240 papers. In 1996, he published a book entitled “Systematic Theory of Neural Networks for Pattern Recognition”, which won the Second-Class Prize of the 8th Excellent High Technology Books of China; in 2001, a second book entitled “Intelligent Signal Processing Technique for High Resolution Radars”; and in 2009, a third book entitled “The Study of Data Mining Methods for Gene Expression Profiles”. His research interests include pattern recognition, biological and artificial neural networks, image processing and bioinformatics.

Hardware-in-the-Loop Simulation of a Flapping Wing MAV
Friday, October 09, 2009
Christopher McMurrough

Abstract: Micro Air Vehicles (MAVs) have a wide range of potential applications including defense, surveillance, and search and rescue. Current MAVs are generally designed as miniature rotorcraft or fixed-wing airplanes. Bio-inspired flapping wing MAVs are desirable for reasons of efficiency and robustness to aerodynamic disturbances. The trend of miniaturization in electro-mechanical systems has made the development of insect-scale flapping wing MAVs a reality.

In this presentation, the feasibility of a 5 DOF simulation-tested controller developed at the US Air Force Research Lab for flapping wing aircraft is discussed. A practical actuator capable of meeting the high performance requirements of the controller is presented with mechanical, electrical, and algorithmic considerations. Initial results of the actuation system are presented, as well as a future hardware-in-the-loop simulation for controller verification.

Biography: Christopher McMurrough is an M.S.C.S.E. student at The University of Texas at Arlington, supervised by Dr. Frank Lewis and Dr. Sajal Das. He has been a researcher at the Automation and Robotics Research Institute (ARRI) since 2006, and a summer researcher at the US Air Force Research Lab in Dayton, Ohio, since 2008. His interests include micro air vehicles, mobile robots, cooperative robotics, and embedded systems.

Study on a Scalable Peer-to-Peer Lookup Protocol
Friday, October 02, 2009
Stella (Hyun Jung) Choe

Abstract: Recently, peer-to-peer (P2P) overlay networks have come to be seen as a promising platform for large-scale distributed systems without centralized control. One advantage of P2P networks is their resilience in terms of data replication, routing recovery, and static resilience related to the detection of failures. This advantage raises a central issue for peer-to-peer applications: how to store data in an efficient location and retrieve it within the expected time.

The Resource Location and Discovery (RELOAD) base protocol has been developed to efficiently store and retrieve data in the overlay. RELOAD is designed for the use of the Session Initiation Protocol (SIP) in P2P networks, where session establishment and management are handled by a collection of peers (and clients) rather than a centralized server.

In this presentation, we introduce a distributed lookup protocol, called Chord, which is part of the RELOAD base protocol. The main feature of Chord is its simplicity: each Chord node maintains information about only O(log N) other nodes, and a lookup is resolved by routing O(log N) messages to other nodes. Experimental results show that Chord is scalable and achieves good performance with up-to-date routing information, but performance degrades when that information is out of date.
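The O(log N) routing can be sketched in a few lines. The simulation below illustrates textbook Chord with made-up node identifiers: each hop forwards the query to the closest finger-table entry that precedes the key, roughly halving the remaining clockwise distance.

```python
def successor(ident, nodes, space):
    """First node clockwise at or after ident on the identifier circle."""
    return min(nodes, key=lambda n: (n - ident) % space)

def chord_lookup(start, key, nodes, m):
    """Iterative Chord lookup in a 2^m identifier space.
    Finger i of a node points at successor(node + 2^i); forwarding to the
    closest finger preceding the key gives O(log N) hops.  Returns the
    node owning the key and the hop count."""
    space = 2 ** m
    node, hops = start, 0
    while True:
        if (key - node) % space == 0:
            return node, hops                    # node itself owns the key
        succ = successor((node + 1) % space, nodes, space)
        if (key - node) % space <= (succ - node) % space:
            return succ, hops                    # key lies in (node, succ]
        fingers = {successor((node + 2 ** i) % space, nodes, space)
                   for i in range(m)}
        closer = [f for f in fingers
                  if 0 < (f - node) % space < (key - node) % space]
        node = max(closer, key=lambda f: (f - node) % space)
        hops += 1

ring = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]    # IDs in a 2^6 space
owner, hops = chord_lookup(start=8, key=54, nodes=ring, m=6)
print(owner, hops)  # node 56 owns key 54, reached in a few hops
```

With ten nodes in a 64-id space the lookup completes in well under m hops; the degradation the abstract mentions corresponds to fingers pointing at departed nodes, which this idealized sketch does not model.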

Biography: Ms. Hyun Jung (Stella) Choe is a Ph.D. candidate supervised by Dr. Sajal K. Das in the Department of Computer Science and Engineering at the University of Texas at Arlington. She is also a researcher in CReWMaN. She received her BS and MS degrees from Sungshin Women's University, South Korea, in 1999 and 2001 respectively. Her current research interests focus on quality of service and communication protocols in wireless sensor networks.

Providing Voice Service Continuity in Evolved Packet Systems
Friday, September 18, 2009
Wei Wu

Abstract: The next generation of mobile cellular networks, called Evolved Packet Systems (EPS), is being standardized and developed to provide Internet Protocol (IP)-based mobile data services as well as traditional voice service. As voice is still deemed one of the key EPS services, mobile operators have to ensure that subscribers enjoy the same quality of user experience for voice service as new IP-based data services become available. In this talk, an overview of the EPS network architecture and its features will be given as background. Then, we will review three EPS solutions for voice service continuity, namely Single Radio Voice Call Continuity (SRVCC), Circuit Switched Fall-Back (CSFB) and Voice over Long Term Evolution via Generic Access (VoLGA). The impacts of these solutions on the user experience are analyzed and compared in terms of call interruption, call setup delay, impact on data services, and device battery consumption.

Biography: Dr. Wei Wu currently serves as a Member of Technical Staff, Advanced Technology, at Research in Motion (RIM), Ltd. He has been involved in standards research on 3GPP Evolved Packet Systems since he joined RIM in 2006. His research interests include quality of service (QoS) and mobility management for the wireless Internet, mobile network and protocol simulation, and peer-to-peer networking.

Before joining RIM, Dr. Wu worked as a systems engineer at Alcatel USA from 2004, where he worked on the system and architecture design of an NGN wireless soft-switch for 2G/3G cellular networks including GSM, UMTS and UMA. Before that, he interned at startups including Cyneta Networks and Spatial Wireless.

Dr. Wu holds a Ph.D. degree in computer science and engineering from the
University of Texas at Arlington, and B.Eng. and M.Eng. degrees both in
electrical engineering from Southeast University, Nanjing, China.

Progress and Trends in Wireless Convergence
Friday, September 04, 2009
Jogen Pathak

Abstract: With increasing globalization and a rapidly growing mobile workforce, businesses are looking for cost-effective ways to provide mobile voice and data applications that improve productivity and reduce cost and decision time. The need to stay connected both on and off campus, with multiple applications such as salesforce.com at one's fingertips, keeps growing and is driving demand for mobile broadband.

Several point solutions have emerged, but they require carrying multiple devices with multiple numbers, increasing cost and complexity while lowering utility. Wireless convergence is an emerging trend that aims to achieve the best coverage, speed and ease of use.

This seminar will address the technical challenges in wireless convergence, as well as current solutions and future trends. To this end, university campuses have a great role to play, both in researching solutions for the problems currently faced and in evangelizing the use of new technologies such as uMobility, available from Varaha Systems, a Metroplex company and "Mobile Desk, Global Reach" solutions provider.

Biography: Jogen Pathak is the founder and CEO of Varaha Systems, Dallas. He has had a long career in wireless networking research and innovation. Previously he was with Nortel Networks and was also CTO of Cyneta Networks. He has collaborated with CReWMaN in the past and would like to initiate new collaborations. He will also be happy to talk with students interested in mobile networks and systems research.
--------------------------------------------------------------------
For details on CReWMaN research, please visit http://crewman.uta.edu

Graph Models for Digital Image Processing and Pattern Recognition
Monday, August 24, 2009
Bin Luo

Abstract: In this talk, I will concentrate on graph-based methods for digital image processing and pattern recognition. After a brief introduction to why graph models are increasingly popular in the digital image processing and pattern recognition community, I will show different graph models for digital images. Then, two research topics in graph-based methods, graph matching and graph spectral analysis of digital images, will be introduced in more detail. After showing some experimental results from our research, I will draw some conclusions and suggest some topics that might be worth exploring.
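The simplest graph model of an image treats pixels as vertices joined to their 4-neighbors. The sketch below (numpy, illustrative only) builds that grid graph for a tiny image and computes its Laplacian spectrum, the starting point for spectral analysis and spectral cuts:

```python
import numpy as np

def grid_laplacian(h, w):
    """Combinatorial Laplacian L = D - A of the 4-connected pixel grid,
    the most basic graph model of an h-by-w digital image."""
    n = h * w
    A = np.zeros((n, n))
    for r in range(h):
        for c in range(w):
            i = r * w + c
            if c + 1 < w:
                A[i, i + 1] = A[i + 1, i] = 1   # right neighbor
            if r + 1 < h:
                A[i, i + w] = A[i + w, i] = 1   # bottom neighbor
    return np.diag(A.sum(axis=1)) - A

L = grid_laplacian(3, 3)
eig = np.linalg.eigvalsh(L)
print(np.round(eig, 3))  # smallest eigenvalue is 0; the next one is the
                         # algebraic connectivity used in spectral cuts
```

In practice the uniform weights above are replaced by pixel-similarity weights, so the small eigenvectors of the same Laplacian reveal image regions rather than pure grid geometry.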

Biography: Professor Bin Luo is the Dean of the School of Computer Science and Technology, Anhui University. He received his Ph.D. degree in Computer Science from the University of York, United Kingdom, in 2002. He was a research associate at the University of York from 2002 to 2004, and a research fellow with British Telecom in 2006. In 2007, he visited the University of Greenwich, England, as a visiting professor. He served as a visiting fellow of the University of New South Wales, Australia, in 2008.

Professor Luo has published some 160 papers in journals, edited books and refereed conferences. His current research interests include graph spectral analysis, large image database retrieval, image and graph matching, statistical pattern recognition, digital watermarking and information security.

At present, he chairs the IEEE Hefei Subsection. He was one of the general chairs of the International Symposium on Information Technologies and Applications in Education, held in Xiamen, China, in 2008. He has served as a peer reviewer for international academic journals such as IEEE Trans. PAMI, Pattern Recognition, Pattern Recognition Letters, International Journal of Pattern Recognition and Artificial Intelligence, Knowledge and Information Systems, and Neurocomputing, and has been on the program committees of many international conferences.

Mobile Embedded Security
Friday, May 01, 2009
Osman Koyuncu

Abstract: As open mobile devices become more prevalent across the user base, various stakeholders would like to develop applications and extend business models by accessing all the features of the handset. These devices include not only smartphones but also portable media processors, navigation and gaming devices. Finding a balance that allows developers to cultivate the mobile community, yet prevents malicious actions by a minority, is a difficult but necessary task. Openness brings new challenges and threats to the mobile industry.

This talk will give an introduction to the mobile embedded security challenges and needs. It will also touch upon emerging applications and trends.

Biography: Mr. Osman Koyuncu received his B.Sc. in Computer Engineering from Middle East Technical University, Turkey, and M.Sc. in Computer Science from the University of North Texas. He is currently pursuing a Ph.D. degree in Computer Science and Engineering at UTA. He has worked in various development, lead and architect roles in the semiconductor industry. He is currently a Security Software Architect in the Wireless Terminals Business Unit at Texas Instruments Inc. Prior to joining TI in 2005, he worked for Fujitsu for 8 years.

Graph Based Semi-Supervised Learning
Wednesday, April 29, 2009
Fei Wang

Abstract: Graph based semi-supervised learning (GBSSL) has attracted considerable interest from the fields of machine learning and data mining, and it has been widely applied in computer vision, information retrieval, and bioinformatics. In this talk, I will present a general formulation of GBSSL and a representative algorithm, label propagation, together with a multilevel scheme that makes it more efficient, including applications in image segmentation, text classification and information retrieval.
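A minimal sketch of label propagation follows (the Zhou et al.-style normalized iteration, one common variant; the tiny chain graph is invented for illustration). Labeled nodes inject their labels, and each iteration spreads label mass along graph edges until unlabeled nodes inherit the class of their nearest labeled region.

```python
import numpy as np

def label_propagation(W, labels, iters=200, alpha=0.99):
    """Graph-based semi-supervised label propagation:
        F <- alpha * S @ F + (1 - alpha) * Y,   S = D^{-1/2} W D^{-1/2}.
    W: symmetric affinity matrix (no isolated nodes);
    labels: -1 for unlabeled, else a class id.  Converges because the
    spectral radius of alpha * S is below one."""
    n = len(labels)
    classes = sorted(set(l for l in labels if l >= 0))
    Y = np.zeros((n, len(classes)))
    for i, l in enumerate(labels):
        if l >= 0:
            Y[i, classes.index(l)] = 1.0
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return [classes[k] for k in F.argmax(axis=1)]

# Chain 0-1-2-3-4: node 0 labeled class 0, node 4 labeled class 1.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
print(label_propagation(W, [0, -1, -1, -1, 1]))
```

The nodes adjacent to each labeled endpoint adopt that endpoint's class, exactly the smoothness-over-the-graph assumption GBSSL rests on.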

Biography: Dr. Fei Wang received his Ph.D. degree from the Department of Automation, Tsinghua University. He is now a postdoctoral research fellow in the School of Computing and Information Sciences, Florida International University. His main research interests include machine learning, data mining and computer vision. He has published over 50 papers in related conferences and journals, including ICML, CVPR, KDD, AAAI, TPAMI and TKDE.

Embedded Links: A Misunderstood and Fundamental Element of Urban-Scale Networks
Tuesday, April 21, 2009
Joseph Camp

Abstract: Many urban communities have unequal access to Internet resources, presenting the technical challenge of providing a high-speed access infrastructure at extremely low cost. To address this challenge, we have deployed a first-of-its-kind, urban-scale wireless mesh network which provides Internet access to thousands of users spanning multiple square kilometers in an underserved area of Houston, TX. However, in this and other urban environments, IEEE 802.11 node interactions are affected by a vast array of factors including topology, channel conditions, modulation rate, packet sizes, and physical-layer capture. In this talk, I overview findings across many different scales from hundreds of thousands of urban measurements and the development of an analytical model to understand the performance of embedded links in this complex scenario. Then, I focus on a fundamental concept involving embedded links: choosing the modulation rate which maximizes throughput is imperative, since each bit of the (overly) shared medium is critical. Yet all existing rate adaptation mechanisms fail to track the ideal rate even in a simple, non-mobile urban scenario. Using a custom cross-layer framework, I implement multiple, previously un-implemented rate adaptation mechanisms to reveal the reasons for this failure, and design modulation rate adaptation mechanisms which are able to track urban and downtown vehicular and non-mobile environments.
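The "ideal rate" criterion is simple to state: pick the modulation rate that maximizes expected throughput, i.e. rate times delivery probability. The per-rate delivery ratios below are invented for a lossy link, not measurements from the Houston deployment, but they show why the nominally fastest rate is rarely the best one:

```python
# 802.11g modulation rates (Mbps) and hypothetical per-rate packet
# delivery ratios for one lossy urban link.
rates = [6, 9, 12, 18, 24, 36, 48, 54]
pdr = [0.98, 0.97, 0.95, 0.90, 0.80, 0.45, 0.10, 0.02]

def best_rate(rates, pdr):
    """Ideal modulation rate: maximize expected throughput, rate * PDR
    (MAC overheads ignored).  A fast rate that loses most packets delivers
    fewer bits than a slower, reliable one."""
    tput = [r * p for r, p in zip(rates, pdr)]
    i = max(range(len(rates)), key=tput.__getitem__)
    return rates[i], tput[i]

print(best_rate(rates, pdr))  # 24 Mbps wins despite faster nominal rates
```

Rate adaptation mechanisms fail when their loss estimates lag the channel; on this static example the optimum is obvious, but on a time-varying urban link the `pdr` column is a moving target.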

Biography: Joseph Camp is a PhD Candidate in the Electrical and Computer Engineering Department at Rice University. He received an M.S. at Rice and B.S. with honors from the University of Texas at Austin, both in ECE. Joseph is the lead grad student and Chief Network Architect for the Technology For All Network, a network which serves over 4,000 users in several square kilometers in Houston, TX. Additionally, he is a member of the team developing the Wireless Open-Access Research Platform (WARP), where he has designed novel rate selection protocols and experimentally evaluated them in diverse scenarios, including residential and downtown urban areas. Joseph is a technical program committee co-chair for the first ACM MobiHoc S^3 Workshop which is a first-of-its-kind technical venue which is "of the students, by the students, and for the students."

Using CyFi (Cypress PSoC and Radio) in Smart Sensor Networks
Tuesday, April 21, 2009
Patrick Kane

Abstract: The Cypress University Alliance (CUA) has been in existence since 2006.
Our mission is to ensure that educators and students have access to
Cypress technology for use in research and teaching.

Biography: Patrick Kane is the director of the CUA and has been since its
inception. Previously he held various technical and marketing roles at
Xilinx for 13+ years.

Power Integrity in Nanometer VLSI Era
Wednesday, April 08, 2009
Min Zhao

Abstract: VLSI is a fundamental and prevailing technology at the core of many high-tech products: from Internet routers to phones, from GPS receivers to defibrillators. This talk will start with an introduction to the background and trends of VLSI technology. A typical VLSI design flow will also be described. As VLSI technology scales into the nanometer regime, its progress faces several serious challenges. One chief challenge is power integrity, which requires a very stable and uniform delivery of supply power to hundreds of millions of transistors under highly dynamic conditions. The main focus of this talk will be my research contributions to solving this challenge. The first part is a hierarchical method for static and transient simulation of the on-chip power delivery network. The second is the optimization of on-chip decoupling capacitor allocation. The third is a novel algorithm for power pad placement. At the end, future research directions will be discussed.
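The static ("IR drop") analysis step amounts to solving the nodal equations G v = i of the power delivery network. A toy one-dimensional grid (all component values invented) shows the characteristic voltage sag away from the supply pad:

```python
import numpy as np

# Toy on-chip power grid: a chain of 5 nodes joined by 0.1-ohm wire
# segments; node 0 connects to the 1.0 V supply pad through 0.05 ohm,
# and every node sinks 10 mA of switching current.
n, r_wire, r_pad, vdd, i_load = 5, 0.1, 0.05, 1.0, 0.01

G = np.zeros((n, n))                 # nodal conductance matrix
for k in range(n - 1):
    g = 1.0 / r_wire
    G[k, k] += g
    G[k + 1, k + 1] += g
    G[k, k + 1] -= g
    G[k + 1, k] -= g
G[0, 0] += 1.0 / r_pad               # pad connection to Vdd

i = np.full(n, -i_load)              # loads pull current out of the grid
i[0] += vdd / r_pad                  # Norton equivalent of the supply pad
v = np.linalg.solve(G, i)            # static IR-drop solution of G v = i
print(np.round(v, 4))                # voltage sags with distance from pad
```

Real power grids have millions of nodes, which is exactly why the talk's hierarchical simulation method matters: a direct solve like `np.linalg.solve` stops scaling long before chip-sized G matrices.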

Biography: Dr. Min Zhao received her Ph.D. degree in Electrical Engineering from the University of Minnesota in 2000. From 2000 to 2007, she was with the advanced tools group of Freescale Semiconductor (formerly the Semiconductor Sector of Motorola), where she was the key developer of several critical design automation technologies including power supply network simulation and optimization, inductance modeling, and clock network analysis. In 2007, she joined Magma Design Automation, where she is in charge of the development of high-frequency circuit simulation technology. Dr. Zhao has published nearly 30 technical papers in premier journals and conferences. She has served on the technical program committees of several international conferences.

Architectural Support for Memory Debugging and Program Monitoring
Wednesday, April 01, 2009
Guru Venkataramani

Abstract: Rapid advances in hardware technology have resulted in exponential growth in computing speeds and hardware platforms. Consequently, software is increasingly prone to bugs and security exploits. In order to detect these bugs, programmers need tools that continuously monitor program runtime behavior. Unfortunately, software-based tools degrade program performance by several orders of magnitude, and programmers are reluctant to use tools that are very slow. My research focuses on providing low-cost, efficient hardware solutions for memory debugging and security. In this talk, I will describe MemTracker, a novel mechanism that offers efficient and programmable hardware for memory debugging. Memory checkers usually associate state with memory words to track the validity of memory accesses, e.g., whether load instructions access initialized heap memory, or whether the return address of a function has been modified. MemTracker offers efficient hardware to perform such memory checks, with the flexibility to implement several different memory checkers. I will also discuss key design decisions, along with how MemTracker can be integrated into a modern out-of-order processor.
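In software, the kind of per-word state machine such a checker runs looks roughly like the following. This is a simplified illustration of a generic uninitialized-read checker, not MemTracker's actual state encoding; all names are invented.

```python
# Each word is in one of three states; memory events either transition
# the state or flag a bug.  Hardware support makes these checks cheap;
# the same logic in software is what slows instrumentation tools down.
UNALLOC, UNINIT, INIT = "unallocated", "uninitialized", "initialized"

class ShadowMemory:
    def __init__(self):
        self.state = {}                      # address -> word state

    def malloc(self, addr, words):
        for a in range(addr, addr + words):
            self.state[a] = UNINIT           # allocated, not yet written

    def store(self, addr):
        if self.state.get(addr, UNALLOC) == UNALLOC:
            raise MemoryError(f"store to unallocated word {addr:#x}")
        self.state[addr] = INIT

    def load(self, addr):
        s = self.state.get(addr, UNALLOC)
        if s != INIT:                        # unallocated or uninitialized
            raise MemoryError(f"load of {s} word {addr:#x}")

shadow = ShadowMemory()
shadow.malloc(0x1000, 2)
shadow.store(0x1000)
shadow.load(0x1000)                          # ok: word was written first
try:
    shadow.load(0x1001)                      # bug: read before write
except MemoryError as e:
    print("detected:", e)
```

Different checkers are just different state sets and transition tables over this same skeleton, which is the flexibility a programmable checker exposes.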

Beyond debugging for correctness, ensuring scalable performance of parallel programs on multi-core architectures is also important. Performance debugging is a key area of research that addresses the scalability challenges faced by programmers. I will briefly talk about opportunities in this area and my future research directions.

Biography: Guru Prasadh Venkataramani is a PhD candidate in the School of Computer Science at Georgia Institute of Technology where he is advised by Prof. Milos Prvulovic. Guru’s research area is computer architecture with emphasis on providing efficient and low-cost hardware support for software debugging, security and programmability. He is also interested in hardware solutions for performance tuning especially for multi-core and emerging many-core architectures.

Declarative Tracepoints: A Programmable and Application Independent Debugging System for Cyber Physical Systems
Wednesday, March 25, 2009
Qing Cao

Abstract: Our planet is becoming smarter with the emergence of three key enabling technologies: instrumentation as enabled by wireless sensor networks, cameras, and RFIDs; inter-connectivity as enabled by recent developments in mobile and pervasive computing; intelligence as enabled by embedded systems, autonomous cars, robots, and energy-efficient smart buildings. What is common to these advances is that they have a computational core that interacts with the physical world. These Cyber-Physical Systems (CPS) are engineered systems that require tight conjoining of and coordination between the computational and physical domains. As these systems become more and more complicated, their program correctness becomes a critical issue. In this talk, I will discuss the challenges of future CPS systems from the perspective of software, and present my work on declarative tracepoints, a programmable and application independent debugging system for these Cyber-Physical Systems. This system automates the debugging process by removing the human from the loop. We show that declarative tracepoints are able to express the core functionality of a range of previously isolated debugging techniques, such as EnviroLog, NodeMD, Sympathy, and StackGuard. We also demonstrate that it can be used to detect real bugs using case studies of three bugs based on the development of the LiteOS operating system.

Biography: Dr. Qing Cao is currently a postdoctoral research associate in the Department of Computer Science at the University of Illinois at Urbana-Champaign. He received his Ph.D. degree from the University of Illinois in October 2008, and his Master's degree from the University of Virginia. His advisor was Professor Tarek Abdelzaher. Dr. Cao is the author or co-author of over 25 papers in premier journals and conferences with over 500 citations. He is the lead developer of the LiteOS operating system, which has more than 450 downloads. Dr. Cao has received a number of awards, including the Vodafone Fellowship, and was a best paper candidate at ACM SenSys 2008. He is a member of both the ACM and the IEEE Computer Society.

Representing and Transforming XML Bi-level Information
Monday, March 23, 2009
Sudarshan Murthy

Abstract: Superimposed applications (SAs) facilitate the superimposing (that is, overlaying) of new information and structures (such as annotations) on parts (such as sub-documents) of existing base information (BI). In this setting, SA developers and users work with bi-level information, a combination of superimposed information (SI) and the referenced BI. The key information-management activities over bi-level information are representation, access, transformation, and interchange.

We have designed a framework for SAs to support the four aforementioned activities. The framework provides means to represent and access bi-level information in different data models. It also defines a mechanism to transform bi-level information to alternative forms using declarative queries expressed in existing query languages and executed by existing query processors.

This talk focuses on representing and transforming bi-level information in the XML model. Specifically, it provides an overview of Sixml, an XML markup language to represent bi-level information; and Sixml Navigator, an alternative XML path navigator that improves both query expression and execution.

Biography: Sudarshan Murthy directs the Applied Research group at Wipro Technologies, headquartered in Bangalore, India. He began his PhD work at the erstwhile Oregon Graduate Institute and completed it at Portland State University. He has a Master's degree in CS from OHSU and a Bachelor's degree in CS&E from Bangalore University. Before joining graduate school, Sudarshan founded and operated (for six years) a software process-engineering business, and before that developed international trade-finance applications for leading banks. He has also taught undergraduate and graduate classes in software engineering and data management.

Functional Programming Perspectives on Concurrency and Parallelism
Thursday, March 12, 2009
Matthew Fluet

Read More

Hide

Abstract: The trend in microprocessor design toward multicore processors has sparked renewed interest in programming languages and language features for harnessing concurrency and parallelism in commodity applications. Past research efforts demonstrated that functional programming provides a good semantic base for concurrent- and parallel-language designs, but slowed in the absence of widely available multiprocessor machines. I will describe new functional programming approaches towards concurrency and parallelism, grounded in more recent functional programming research.

To frame the discussion, I will introduce the Manticore project, an effort to design and implement a new functional language for parallel programming. Unlike some earlier parallel language proposals, Manticore is a heterogeneous language that supports parallelism at multiple levels. In this talk, I will describe a number of Manticore's notable features, including implicitly-parallel programming constructs (inspired by common functional programming idioms) and a flexible runtime model that supports multiple scheduling disciplines. I will also take a deeper and more technical look at transactional events, a novel concurrency abstraction that combines first-class synchronous message-passing events with all-or-nothing transactions. This combination enables elegant solutions to interesting problems in concurrent programming. Transactional events have a rich compositional structure, inspired by the use of monads for describing effectful computations in functional programming. I will conclude with future research directions in the Manticore project, aiming to combine static and dynamic information for the implementation and optimization of parallel constructs.

Biography: I graduated with a Ph.D. in computer science from Cornell University in January 2007. My advisor was Greg Morrisett (now at Harvard University). I graduated with a B.S. in mathematics from Harvey Mudd College in 1999. My advisor was Arthur Benjamin. I am an active developer of MLton: an open-source, whole-program, optimizing Standard ML compiler. I am collaborating on the development of Manticore: a heterogeneous parallel programming language aimed at general-purpose applications running on multi-core processors. As a programming languages researcher, I am excited about the opportunities for mechanizing reasoning about programming languages. The POPLMark Challenge hopes to spark additional interest in this problem. As a result of discussions about the POPLMark Challenge, I have started using Twelf in my research, and I have collected a set of interesting examples. I participate in both HYPER, the Hyde Park programming languages reading group, and PL Group, a weekly forum for informal talks on relevant and interesting topics in programming languages.

Domain-Specific Language Extension for Correctness and Performance
Friday, March 06, 2009
Nathaniel Nystrom

Read More

Hide

Abstract: Modern computing environments present new software development challenges. The problems of concurrency, distribution, security, and extensibility must be addressed for today's software applications to be successful. These features are notoriously difficult to program, to test, and to debug. Programming languages can address these problems by allowing developers to express invariants to be used by compilers and other tools to rule out errors in programs before they are run and to generate more efficient code. A key challenge is providing language features that permit programmers to express application-specific invariants and permit construction of tools to use these invariants.

In this talk, I will describe my work on compilers and programming language features that enable construction of domain-specific extensions to X10, an object-oriented programming language for high-performance computing. X10 provides powerful mechanisms that enable users to extend the syntax and semantics of the core language. Annotations and compiler plug-ins allow programmers to refine the type information in the program and to perform static analyses on these types. X10's dependent type system allows programmers to specify invariants that are enforced by the compiler to rule out run-time errors and that are used to optimize code.

This talk is based on joint work with Vijay Saraswat, Jens Palsberg, Christian Grothoff, Andrew Myers, Michael Clarkson, Stephen Chong, and Xin Qi.

Biography: Nathaniel Nystrom is a postdoctoral researcher at the IBM T.J. Watson Research Center in Hawthorne, NY. His research interests include programming languages, compilers, tools, and methodologies for constructing safe, secure, and efficient systems. He has done work on software extensibility, language-based security, programming language runtime systems, and compiler optimizations. He received his Ph.D. in Computer Science from Cornell University in 2007, and holds B.S. and M.S. degrees in Computer Science from Purdue University and an M.S. in Computer Science from Cornell.

From the Trenches: Real World Video Game Development and the Connection with Higher Education
Wednesday, March 04, 2009
Jim Galis

Read More

Hide

Abstract: The game industry has exploded over the last 10 years, making huge advances in content, audience, and technology. The video game market now rivals the motion picture industry with annual revenues of over $20 billion, and the competition to make “the hit game” is intense. The only way game developers have a chance of success is to employ the best talent the market has to offer. Grooming engineers and artists for the game industry requires specialized education at the higher levels. Incorporating course study focused on game development into BS or MS curricula helps to ensure graduates are prepared for the dynamic environment of a game studio. This presentation will provide an overview of the video game development business, with first-hand insight into what it takes to be on a game team, how it works, and how a degreed graduate can be successful in the industry. Topics will also cover how college courses can be designed to inspire creativity, expose students to real-world game development situations, promote team interaction, and build knowledge that is essential at any game development studio.

Biography: Jim has spent the last 11 years as a video game studio executive, successfully shipping 8 major titles, from Beetle Adventure Racing (N64) for Electronic Arts to the most recent Stuntman: Ignition (Xbox 360/PS3/PS2) for THQ. He has managed large teams of engineers, artists, designers, and producers, and has been responsible for game project budgets in excess of $20M. He has experience in all phases of production, including the use of Waterfall, Agile, and Scrum methods, as well as expertise in in-game advertising, marketing/PR support, and studio management. Before video games, he spent over 10 years as a software engineer in visual simulation, programming real-time graphics systems for flight simulators. He holds a BS in CSE from the University of Texas at Arlington, is an active member of ACM SIGGRAPH and the IGDA, and is a member of the Academy of Interactive Arts and Sciences.

Mobile Sensor Networks under Intermittent Connectivity
Wednesday, February 25, 2009
Hongyi Wu

Read More

Hide

Abstract: This talk centers on the Delay/Fault-Tolerant Mobile Sensor Network (DFT-MSN) in support of pervasive information gathering, which plays a key role in many military and civilian applications, ranging from environmental monitoring to pandemic alert and response. The mainstream sensor networking approach is to densely deploy a large number of small, highly portable, and inexpensive sensor nodes with low-power, short-range radios, forming a well-connected wireless mesh network. This approach, however, does not work effectively in DFT-MSN, which has a few unique characteristics, including nodal mobility, sparse connectivity, delay tolerance, fault tolerance, and small nodal buffer space. In this talk, I will introduce the principles, protocols, analytic models, and prototyping and experimental evaluation of DFT-MSN.

Biography: Hongyi Wu received his B.S. degree from Zhejiang University, China in 1996; his M.S. degree in Electrical & Computer Engineering and Ph.D. degree in Computer Science from the State University of New York at Buffalo in 2000 and 2002, respectively. Since then, he has been with the Center for Advanced Computer Studies (CACS), University of Louisiana at Lafayette, where he was promoted to Associate Professor in Summer 2007 and appointed as the Alfred and Helen Lamson Endowed Professor in Computer Science in Fall 2008.

His research interests include wireless mobile ad hoc networks, wireless sensor networks, next-generation cellular systems, and integrated heterogeneous wireless systems. He has served as chair and technical committee member of several IEEE conferences, and as guest editor of two special issues of ACM MONET. He has published more than seventy technical papers in leading journals and conference proceedings. He received an NSF CAREER Award in 2004.


Channel Assignment in Wireless Mesh Networks: A General View and a New Heuristic Algorithm
Friday, February 20, 2009
Vanessa Gardellin

Read More

Hide

Abstract: Recently, there has been increasing interest in using Wireless Mesh Networks (WMNs) as broadband backbone networks. WMNs are typically configured to operate on a single channel using a single radio. This single-channel configuration adversely affects the capacity of the mesh because all nodes compete on the same channel, causing interference among adjacent nodes. Equipping each node with multiple Network Interface Cards (NICs) is emerging as a promising approach to improving the capacity of WMNs. The IEEE 802.11 Wireless LAN standards allow multiple non-overlapping frequency channels to be used simultaneously. The presence of multiple channels requires addressing the problem of which channel to use for a particular transmission. The routing strategy in the network determines the load on each 802.11 interface, and in turn affects the bandwidth requirement and thus the channel assignment of each interface. The joint channel assignment and routing problem is NP-complete, based on its mapping to a graph-coloring problem. The presentation will illustrate how to manage the limited number of channels and NICs, and it will propose the Partitioned mesh network load and interference aware channel assignment and routing (PaMeLA) algorithm.
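The graph-coloring view of the problem can be sketched with a greedy heuristic (an illustrative toy, not the PaMeLA algorithm; the conflict graph and tie-breaking rule below are made up): links are vertices, an edge joins links that interfere, and channels are the colors.

```python
# Greedy graph-coloring sketch of channel assignment: assign each link
# a channel different from all interfering links when possible.
def assign_channels(interference, num_channels):
    """interference: dict mapping each link to the links it conflicts with."""
    channel = {}
    # Color the most-constrained links first (a common greedy heuristic).
    for link in sorted(interference, key=lambda l: -len(interference[l])):
        used = {channel[n] for n in interference[link] if n in channel}
        free = [c for c in range(num_channels) if c not in used]
        # If no conflict-free channel is left, fall back to the least-used one.
        channel[link] = free[0] if free else min(
            range(num_channels), key=lambda c: list(channel.values()).count(c))
    return channel

conflicts = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(assign_channels(conflicts, 3))
```

Because the underlying coloring problem is NP-complete, heuristics like this trade optimality for speed; PaMeLA additionally accounts for load and routing, which this sketch ignores.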

Biography: Ms. Vanessa Gardellin is a Ph.D. student in the Department of Computer Science Engineering, University of Pisa. She received her Master's Degree in Computer Science Engineering in 2007 from the University of Pisa, Italy, under the supervision of Prof. Luciano Lenzini. Since January 2009 she has been a visiting scholar in the CReWMan Laboratory, Department of Computer Science and Engineering, University of Texas at Arlington, under the supervision of Prof. Sajal Das. Her current research activity includes wireless mesh networks (resource sharing, scheduling, routing, channel assignment), cognitive radio, and channel assignment in cellular networks with a game-theoretic approach.

Information Visualization vs Visual Arts
Wednesday, February 04, 2009
Kang Zhang

Read More

Hide

Abstract: In this talk, we will study some of the theories and practices in visual arts, in particular abstract painting, and their potential usefulness for the aesthetic design of information visualization. We discuss the three dimensions of painting, i.e., form, color, and texture; various visual cognition principles; and finally aesthetic compositions used in abstract painting. Our objective is to bridge visual arts with information visualization, so that the latter could learn from the former in creating more aesthetic visualizations, thus making the viewer's visualization process a pleasant and engaging experience. Other research directions at the UTD Visual Computing Lab will also be briefly mentioned.

Biography: Kang Zhang is Professor, Associate Department Head, and Director of Visual Computing Lab of Computer Science Department at the University of Texas at Dallas. He received his B.Eng. in Computer Engineering from the University of Electronic Science and Technology, China, in 1982; and Ph.D. from the University of Brighton, UK, in 1990. Prior to joining UTD in the USA, he held academic positions in the UK and Australia.

Dr. Zhang's current research interests are in the areas of information visualization, visual programming and visual languages, and Web engineering; he has published over 170 papers in these areas. He has authored and edited four books. His research has been funded by the UK SERC, the Australian Research Council, Sun Microsystems, Texas State, the US NSF, and the US Department of Education. He has been the General Chair and Program Chair of several major international conferences. Dr. Zhang is also on the Editorial Boards of the Journal of Visual Languages and Computing and the International Journal of Software Engineering and Knowledge Engineering. His home page is at www.utdallas.edu/~kzhang.

Vision Research in Demokritos
Wednesday, January 28, 2009
Dimitrios Kosmopoulos

Read More

Hide

Abstract: The goal of this talk is to present recent work in the field of computer and robot vision at the Computational Intelligence Laboratory of the National Center for Scientific Research "Demokritos". This work includes tracking of moving targets under occlusions, behavior understanding of humans in indoor environments, and automated production of personalized videos in entertainment parks.

Furthermore, newly introduced techniques for robust classification of time series will be presented. For this purpose, a new Hidden Markov Model employing a Student's-t mixture observation model will be presented, along with application results in gesture recognition and biomedical applications (EEG, fMRI). The last part of the presentation will be about recently published techniques for modeling objects using 3D data (e.g., from laser scanners), with application to robotic "bin picking". The research that will be presented has been performed in the framework of several EU and national projects (SCOVIS, POLYMNIA, SemVeillance, PENED).

Biography: Dimitrios Kosmopoulos received his B.Sc. in Electrical and Computer Engineering in 1997 from the National Technical University of Athens and his Ph.D. degree in 2002 from the same institution. He has worked as technical coordinator in many research and industrial projects in the fields of computer vision, multimedia analysis, and robotics. Before joining NCSR "Demokritos" as a Research Scientist, he was employed at the National Technical University of Athens and at inos Automations software (Germany). He is a visiting Professor at the Technical Educational Institute of Athens and was a Lecturer at the University of Peloponnese.

Education: Are there any questions?
Friday, December 05, 2008
Yale Patt

Read More

Hide

Abstract: "After more than 40 years of teaching, I have acquired more than a few opinions on education: The CORRECT way to introduce serious students to computers, why bottom-up is better than top-down and much better than this latest notion -- top-up. My personal set of rules (my Ten Commandments) for being a good teacher. What is wrong with distance learning. JAVA vs. other religions. Why high tech can be the enemy of education. With so many different topics, how do I know I am talking about something the audience wants to hear about? Ergo, the title.
I will start with a few slides about some of the items above, until someone asks a question. I will go from there...until someone asks another question. This talk has no compass to get us back on track since there is no track."

Biography: Dr. Patt teaches the required freshman Intro to Computing course to 400 first-year students every other fall and the advanced graduate course to PhD students in micro-architecture every other spring. He currently directs the research of nine PhD students, while at the same time having some success at research and consulting in the high-tech microprocessor area. His research ideas (HPS, branch prediction, etc.) have been adopted by almost every microprocessor manufacturer on practically every high-end chip design of the past 10 years.

Dr. Patt has earned appropriate degrees from reputable universities and has received more than his share of prestigious awards for his research and teaching. More detail on his interests and accomplishments may be obtained from his web site: www.ece.utexas.edu/~patt.

Towards Robust Trust Establishment in Online Communities with SocialTrust
Monday, November 24, 2008
James Caverlee

Read More

Hide

Abstract: Web 2.0 promises rich opportunities for information sharing, electronic commerce, and new modes of social interaction, all centered around the "social Web" of user-contributed content, social annotations, and person-to-person social connections. But the increasing reliance on this "social Web" also places individuals and their computer systems at risk. In this talk, we identify a number of vulnerabilities inherent in online communities and study opportunities for malicious participants to exploit the tight social fabric of these networks. With these problems in mind, we propose the SocialTrust framework for tamper-resilient trust establishment in online communities. Two of the salient features of SocialTrust are its dynamic revision of trust by (i) distinguishing relationship quality from trust; and (ii) incorporating a personalized feedback mechanism for adapting as the community evolves. We experimentally evaluate the SocialTrust framework using real online social networking data consisting of millions of MySpace profiles and relationships. We find that SocialTrust supports robust trust establishment even in the presence of large-scale collusion by malicious participants.
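The two salient features named above can be caricatured in a few lines (the update rule, weights, and example users below are illustrative inventions, not the SocialTrust paper's formulas): per-link relationship quality weights a PageRank-style trust propagation, and community feedback ratings discount the result.

```python
# Toy sketch: trust flows over relationships weighted by link quality,
# then per-user feedback (0..1) discounts poorly rated participants.
def social_trust(links, quality, feedback, rounds=50, damping=0.85):
    users = list(links)
    trust = {u: 1.0 / len(users) for u in users}
    for _ in range(rounds):
        new = {}
        for u in users:
            # Trust flows in over incoming relationships, scaled by quality.
            inflow = sum(trust[v] * quality[(v, u)] / len(links[v])
                         for v in users if u in links[v])
            new[u] = (1 - damping) / len(users) + damping * inflow
        trust = new
    # Feedback distinguishes having relationships from being trustworthy.
    return {u: trust[u] * feedback[u] for u in users}

links = {"alice": ["bob"], "bob": ["alice"], "spam": ["alice"]}
quality = {("alice", "bob"): 0.9, ("bob", "alice"): 0.9, ("spam", "alice"): 0.2}
feedback = {"alice": 1.0, "bob": 0.9, "spam": 0.1}
print(social_trust(links, quality, feedback))
```

Note how the spammer's link to a reputable user raises the spammer's trust very little: low link quality throttles propagation, and low feedback throttles the final score, which is the intuition behind tamper resilience.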

Biography: James Caverlee is an Assistant Professor of Computer Science at Texas A&M University. Dr. Caverlee directs the Web and Distributed Information Management Lab at Texas A&M and is also affiliated with the Center for the Study of Digital Libraries. At Texas A&M, Dr. Caverlee is leading research projects on (i) SocialTrust: Trusted Social Information Management; (ii) SpamGuard: Countering Spam and Deception on the Web; and (iii) Distributed Web Search, Retrieval, and Mining. Dr. Caverlee received his Ph.D. from Georgia Tech in 2007 (advisor: Ling Liu; co-advisor: William B. Rouse). Dr. Caverlee graduated magna cum laude from Duke University in 1996 with a B.A. in Economics. He received the M.S. degree in Engineering-Economic Systems & Operations Research in 2000, and the M.S. degree in Computer Science in 2001, both from Stanford University.

Boosting Schema Matchers
Friday, November 07, 2008
Avigdor Gal

Read More

Hide

Abstract: Schema matching is recognized to be one of the basic operations required by the process of data and schema integration, and thus has a great impact on its outcome. We propose a new approach to combining matchers into ensembles, called Schema Matcher Boosting (SMB). This approach is based on a well-known machine learning technique called boosting. We present a boosting algorithm for schema matching with a unique ensembler feature, namely the ability to choose the matchers that participate in an ensemble. SMB introduces a new promise for schema matcher designers: instead of trying to design a perfect schema matcher that is accurate for all schema pairs, a designer can focus on finding better-than-random schema matchers. We provide thorough comparative empirical results showing that SMB outperforms, on average, any individual matcher. In our experiments we compared SMB with more than 30 other matchers over real-world data of 230 schemata and several ensembling approaches, including the Meta-Learner of LSD. Our empirical analysis shows that SMB is consistently dominant, far beyond any other individual matcher. Finally, we observe that SMB performs better than the Meta-Learner in terms of precision, recall, and F-measure.
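The ensemble idea can be sketched with a generic AdaBoost-style loop (an illustrative toy, not the SMB algorithm itself; the weak matchers and training pairs are invented): each matcher labels attribute pairs as match/non-match, and boosting both weights the matchers and, by stopping at better-than-random ones, selects them.

```python
# AdaBoost-style sketch: combine weak schema matchers into an ensemble.
import math

def boost(matchers, pairs, labels, rounds=3):
    w = [1.0 / len(pairs)] * len(pairs)   # weight per training pair
    ensemble = []                         # chosen (alpha, matcher) pairs
    for _ in range(rounds):
        # Pick the matcher with the lowest weighted error...
        best = min(matchers, key=lambda m: sum(
            wi for wi, p, y in zip(w, pairs, labels) if m(p) != y))
        err = sum(wi for wi, p, y in zip(w, pairs, labels) if best(p) != y)
        if err >= 0.5:                    # no better-than-random matcher left
            break
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
        ensemble.append((alpha, best))
        # ...then re-weight so its mistakes count more next round.
        w = [wi * math.exp(alpha if best(p) != y else -alpha)
             for wi, p, y in zip(w, pairs, labels)]
        total = sum(w)
        w = [wi / total for wi in w]
    return lambda p: 1 if sum(
        a * (1 if m(p) == 1 else -1) for a, m in ensemble) > 0 else 0

def exact(p):      # weak matcher: identical attribute names
    return 1 if p[0] == p[1] else 0

def contains(p):   # weak matcher: one attribute name contains the other
    return 1 if p[0] in p[1] or p[1] in p[0] else 0

pairs = [("id", "id"), ("name", "fullname"), ("zip", "state")]
labels = [1, 1, 0]
matcher = boost([exact, contains], pairs, labels)
```

The `err >= 0.5` cutoff is the "better than random" requirement mentioned above: boosting can amplify any weak matcher, but only one that beats a coin flip on the weighted data.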

Biography: Avigdor Gal is an Associate professor at the Faculty of Industrial Engineering & Management at the Technion. He received his D.Sc. degree from the Technion in 1995 in the area of temporal active databases. He has published more than 80 papers in journals (e.g. Journal of the ACM (JACM), ACM Transactions on Database Systems (TODS), IEEE Transactions on Knowledge and Data Engineering (TKDE), ACM Transactions on Internet Technology (TOIT), and the VLDB Journal), books (Temporal Databases: Research and Practice) and conferences (ICDE, ER, CoopIS, BPM) on the topics of data integration, temporal databases, information systems architectures, and active databases.

Avigdor is a steering committee member of IFCIS, a member of IFIP WG 2.6, and a recipient of the IBM Faculty Award for 2002-2004. He is a member of the ACM and a senior member of IEEE.

Parameterized Unit Testing with Pex, a White Box Test Input Generation Tool for .NET
Monday, November 03, 2008
Nikolai Tillmann

Read More

Hide

Abstract: Pex (http://research.microsoft.com/Pex) is an automatic test generation tool for .NET developed at Microsoft Research. Pex discovers boundary conditions in code that cause failures and generates traditional unit test suites with high code coverage. This is achieved by a systematic exploration of feasible execution paths of the program, using a constraint solver to compute test inputs, which will take the program along each path. Pex enables Parameterized Unit Testing, an extension of traditional unit testing that reduces test maintenance costs. Pex has been used in Microsoft to test core .NET components. Pex is integrated into Microsoft Visual Studio.
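The exploration loop behind this style of test generation can be caricatured in a few lines (a toy stand-in, not Pex: a brute-force search over a small integer range plays the role of the constraint solver, and the program under test is invented):

```python
# Toy sketch of dynamic symbolic execution: run the program, record its
# branch decisions, then systematically flip branches to force new paths.
def program(x, trace):
    """Unit under test; records each branch decision it takes."""
    trace.append(x > 10)
    if x > 10:
        trace.append(x % 2 == 0)
        return "big-even" if x % 2 == 0 else "big-odd"
    return "small"

def solve(path):
    """Stand-in 'solver': brute-force an input whose trace starts with path."""
    for x in range(-50, 51):
        t = []
        program(x, t)
        if t[:len(path)] == path:
            return x
    return None

def explore():
    seen, worklist, outcomes = set(), [[]], set()
    while worklist:
        x = solve(worklist.pop())
        if x is None:
            continue                      # path is infeasible
        trace = []
        outcomes.add(program(x, trace))
        # Enqueue the observed path with each branch flipped in turn.
        for i in range(len(trace)):
            flipped = trace[:i] + [not trace[i]]
            if tuple(flipped) not in seen:
                seen.add(tuple(flipped))
                worklist.append(flipped)
    return outcomes

print(explore())
```

A real tool records symbolic constraints over the inputs and asks an SMT solver for satisfying assignments instead of enumerating candidates, but the driver loop, flip a branch, solve for an input, re-execute, is the same shape.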

Biography: Nikolai Tillmann is a Principal Software Design Engineer at Microsoft Research. He is currently leading the Pex project, a framework for runtime verification and automatic test case generation for .NET applications based on parameterized unit testing, dynamic symbolic execution, and an SMT solver. Previously, he worked on AsmL, an executable modeling language, and Spec Explorer, a model-based testing tool. His research interests include program specification, analysis, testing, and verification. He received his M.S. ("Diplom") in Computer Science from the Technical University of Berlin in 2000.

Why I Can Debug Some Numerical Programs But You Can't
Monday, November 03, 2008
William Kahan

Read More

Hide

Abstract:

Biography: Professor Kahan is best known for his contribution to the IEEE 754 floating-point arithmetic standard, for which he won the 1989 ACM Turing Award, widely regarded as the Computer Science equivalent of the Nobel Prize. Professor Kahan is a member of The National Academy of Arts & Sciences, a foreign (Canadian) associate of the National Academy of Engineering, and an ACM Fellow.

Data Mining Problems in Sensor Networks
Tuesday, April 22, 2008
Dimitrios Gunopulos

Read More

Hide

Abstract: Sensor networks of inexpensive, efficient, and lightweight nodes are enabling the recording of the physical world with unprecedented capacity. Large-scale sensor network deployments have already emerged in environmental and habitat monitoring, healthcare, seismic and structural monitoring, industrial manufacturing, and military missions. The issue of managing this deluge of sensor data has become of paramount importance in recent years. In this talk we discuss recent progress and key technical challenges for reliable data management in sensor networks. We focus on in-network data storage and data analysis techniques, as well as data stream analysis techniques. We address the challenges and research opportunities that arise in the area, and present open problems for future research.

Biography: Dimitrios Gunopulos got his PhD from Princeton University. He has held regular and visiting positions at the Max-Planck-Institute for Informatics, the University of Helsinki, the IBM Almaden Research Center, the Department of Computer Science and Engineering, University of California, Riverside, and the Department of Informatics, University of Athens. His research is in the areas of Data Mining and Knowledge Discovery in Databases, Databases, Sensor Networks, Peer-to-Peer systems, and Algorithms. His research has been supported by NSF (including an NSF CAREER award and an ITR award), the DoD, the Institute of Museum and Library Services, the Tobacco Related Disease Research Program, the European Commission, and AT&T. He has served as a Program Committee co-Chair in the 2008 IEEE ICDM and the ACM SIGKDD 2006.

Fluid Animation Methods for Movie Special Effects
Monday, April 14, 2008
Byungmoon Kim

Read More

Hide

Abstract: Over the past decades, computer graphics has expanded its applications from CAD to movie special effects and the game industry, thanks to significant advances in rendering, mesh modeling (we will briefly discuss a shadow renderer and a mesh filter), and animation methods. Among these, realistic animations of fluids, rigid or flexible objects, or fractures cannot be obtained by artists at an affordable cost, or by engineering/scientific simulations that are not directly applicable to versatile animation problems. Therefore, the computer graphics community has performed extensive research on simulation-based computer animation. Recent advances in this research have led to a significant increase in realism and have benefited the digital entertainment industry. We will discuss two such methods for improving the realism of fluid simulations: (1) improved BFECC advection, which increases the dynamics of simulated fluid motion, and (2) a volume control technique that prevents the loss of fluid volume. BFECC (Back and Forth Error Compensation and Correction) was recently developed for interface computation using a level set method. We show that BFECC can be applied to reduce dissipation and diffusion encountered in a variety of advection steps, such as level set, velocity, smoke density, image, and dye advection on uniform and adaptive grids and on a triangulated surface. BFECC provides second-order accuracy in both space and time.
Liquid or gas simulated by the level set method can suffer from a slow but steady volume error that accumulates to a visible amount of volume change. We propose to address this problem using a volume control method. We trace the volume change of each connected region and apply a carefully computed divergence that compensates for undesired volume changes. To compute the divergence, we construct a mathematical model of the volume change, choose control strategies that regulate the modeled volume error, and establish methods to compute the control gains that provide robust and fast reduction of the volume error and, if desired, control of how the volume changes over time.
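A one-dimensional sketch of the BFECC idea (illustrative only; production simulators apply it on 2D/3D grids, and the grid size and velocity here are arbitrary): advect forward, advect the result backward, use half the round-trip error to correct the input, then advect once more.

```python
# BFECC on top of first-order semi-Lagrangian advection, periodic 1D grid.
import math

def semi_lagrangian(phi, velocity, dt):
    """First-order semi-Lagrangian advection with linear interpolation."""
    n, out = len(phi), []
    for i in range(n):
        x = i - velocity * dt              # trace back along the flow
        j = math.floor(x)
        frac = x - j                       # interpolation weight in [0, 1)
        out.append(phi[j % n] * (1 - frac) + phi[(j + 1) % n] * frac)
    return out

def bfecc(phi, velocity, dt):
    """Back and Forth Error Compensation and Correction (one step)."""
    forward = semi_lagrangian(phi, velocity, dt)
    back = semi_lagrangian(forward, -velocity, dt)
    # The field should return to itself after a forward/backward round
    # trip; half the observed discrepancy estimates the advection error.
    corrected = [p + 0.5 * (p - b) for p, b in zip(phi, back)]
    return semi_lagrangian(corrected, velocity, dt)
```

Plain semi-Lagrangian advection damps a smooth profile noticeably in one step; the BFECC correction cancels the leading error term, which is why it yields second-order accuracy from a first-order building block.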

Biography: Dr. ByungMoon Kim received a Ph.D. in computer science in 2006 from the Georgia Institute of Technology. Shortly after that, he joined NVIDIA Corp., where he worked on graphics device driver development, real-time graphics research, and physics simulations. His research interests are in computer graphics, focusing on fluid simulation, geometry processing such as mesh filtering and editing, and haptic devices.


Dynamic Data Replication Schemes for Mobile Ad-hoc Network Based on Aperiodic Updates
Wednesday, March 26, 2008
Sanjay Madria

Read More

Hide

Abstract: Traditional replication schemes are passive in nature, and rarely consider the characteristics of the mobile ad hoc network environment. In this talk, I will present three dynamic data replication schemes for mobile ad-hoc networks. I propose replication algorithms that consider aperiodic updates and integrate user profiles consisting of mobile users' mobility schedules, access behavior, and read/write patterns. These schemes actively reconfigure the replicas to adjust to changes in user behavior and network status. I will present replication algorithms and their performance evaluation in an environment where data items are updated aperiodically, and where the frequency of access to each data object from mobile hosts and the status of the network connection are also considered. I will also talk about some new consistency measures in this environment. This talk is mainly based on an IEEE Transactions on Mobile Computing journal paper that appeared in Nov. 2006.
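The flavor of profile-driven replica placement can be sketched as a simple scoring rule (purely illustrative; the weights, profile fields, and function below are inventions, not the paper's algorithms): hosts whose users read an item often and whose links are stable are better replica sites.

```python
# Toy sketch: rank hosts as replica sites by read frequency (from user
# profiles) and link stability (from network status), both in [0, 1].
def place_replicas(hosts, k, w_read=0.7, w_link=0.3):
    """Return the k best-scoring hosts for holding a replica."""
    score = {h: w_read * p["read_freq"] + w_link * p["link_stability"]
             for h, p in hosts.items()}
    return sorted(score, key=score.get, reverse=True)[:k]

hosts = {
    "h1": {"read_freq": 0.9, "link_stability": 0.8},  # frequent reader, stable
    "h2": {"read_freq": 0.2, "link_stability": 0.9},
    "h3": {"read_freq": 0.8, "link_stability": 0.1},  # reads a lot, flaky link
}
print(place_replicas(hosts, 2))
```

A dynamic scheme would re-run a decision like this as profiles and connectivity change, which is the "active reconfiguration" contrasted with passive schemes above.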

Biography: Sanjay Kumar Madria received his Ph.D. in Computer Science from the Indian Institute of Technology, Delhi, India in 1995. He is an Associate Professor in the Department of Computer Science at the University of Missouri-Rolla, USA. Earlier, he was a Visiting Assistant Professor in the Department of Computer Science, Purdue University, West Lafayette, USA. He has also held appointments at Nanyang Technological University in Singapore. He has published more than 120 journal and conference papers in the areas of mobile and sensor data management. He has organized international conferences and workshops and presented tutorials in the areas of mobile computing. He has given invited talks and served as a panelist for the National Science Foundation, USA, and the Swedish Research Council. His research is supported by NSF, DOE, UMRB, and industrial grants totaling over $1.6M. He was awarded a JSPS fellowship in 2006. He is an IEEE Senior Member and an IEEE CS Distinguished Speaker.

BehaviorScope: A Low-power Sensor Network Architecture for Understanding Human Activities over Space and Time
Monday, March 10, 2008
Dimitrios Lymberopoulos

Read More

Hide

Abstract: I will present BehaviorScope, a flexible, low-power sensor network architecture for studying and interpreting human activities and behaviors over space and time. All the different aspects of the BehaviorScope system, ranging from the low-power platform and sensing architectures up to the interpretation mechanisms and abstractions used, will be presented and demonstrated. The main idea behind the proposed system is that human behaviors can be decomposed into sequences of very primitive actions that take place over space and time. Different activities can be described by simply combining these primitive actions over time in different ways. A multimodal wireless sensor network monitoring humans over space and time provides a stream of basic sensing features, called phonemes. This set of phonemes becomes the human activity alphabet. By hierarchically parsing the network-detected phonemes over time into primitive actions and then into simple activities and macroscale behaviors, we manage to do in the human activity domain the analogue of natural language processing, where letters are combined to form words, and words are combined to form sentences, stories, and so on. My talk will describe how different activity types can be detected using probabilistic grammars and how to develop activity models from sensor data. The architectural implications of this approach will also be discussed.
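The letters-to-words-to-sentences analogy can be sketched with a toy two-level parser (the phonemes, rules, and greedy strategy below are invented for illustration; the actual system uses probabilistic grammars rather than exact lookup):

```python
# Toy hierarchical parse: sensor "phonemes" -> primitive actions -> activity.
actions = {                 # phoneme bigrams -> primitive actions
    ("door", "motion_hall"): "enter",
    ("motion_kitchen", "appliance"): "cook",
    ("motion_hall", "door"): "leave",
}
activities = {              # action sequences -> higher-level activities
    ("enter", "cook", "leave"): "prepare-meal",
}

def parse(phonemes):
    """Greedy bottom-up pass over the phoneme stream."""
    acts, i = [], 0
    while i < len(phonemes):
        pair = tuple(phonemes[i:i + 2])
        if pair in actions:
            acts.append(actions[pair])
            i += 2
        else:
            i += 1              # skip phonemes no rule explains
    # If the action sequence matches an activity rule, report the activity;
    # otherwise surface the recognized primitive actions.
    return activities.get(tuple(acts), acts)

stream = ["door", "motion_hall", "motion_kitchen", "appliance",
          "motion_hall", "door"]
print(parse(stream))            # -> 'prepare-meal'
```

A probabilistic grammar replaces these exact lookups with scored derivations, so noisy or partially observed phoneme streams can still be parsed to the most likely activity.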

Biography: Dimitrios Lymberopoulos is a Ph.D. candidate at the Department of Electrical Engineering at Yale University where he has been working under the supervision of Andreas Savvides since 2003. He received his Diploma from the Computer Engineering and Informatics Department at the University of Patras, Greece in 2003. His research focuses on design and implementation of a low power sensor network architecture for understanding human behaviors over space and time. Other aspects of his work include the design and implementation of power-aware sensor node architectures and the exploration of different sensor physical layers for node localization. In 2006 he was the recipient of a Microsoft Research Fellowship that has been supporting his graduate studies over the last two years.

System-Level Modeling for Embedded System Design Automation
Wednesday, March 05, 2008
Andreas Gerstlauer

Read More

Hide

Abstract: Embedded computer systems are ubiquitous, integrated into many devices we interact with on a daily basis. They are characterized by their application-specific nature and tight constraints. Driven by ever increasing application demands and technological advances that allow us to put complete multi-processor systems on a chip (MPSoCs), system complexities are growing exponentially. This makes the process of designing embedded systems a tremendous challenge and traditional design methods infeasible.
In this talk, I will present an approach for automation of the design process at the electronic system level (ESL). The key to any automated design flow is a set of well-defined abstraction levels, models, and transformation steps between them. We have developed such concepts and techniques for modeling both system computation and communication at various levels of abstraction and across hardware and software boundaries. In a complete modeling flow, all models can be automatically generated from an abstract input specification. Models support validation through simulation and analysis with generally high accuracy and little overhead. Furthermore, models have been defined such that they can be automatically synthesized into the final system hardware and software. Tools based on this work have been integrated under a common GUI in the System-On-Chip Environment (SCE), and we have applied SCE to a wide variety of industrial-size design examples. Results show the feasibility and benefits of the approach for rapid, early design space exploration, demonstrating that significant productivity gains can be achieved.

Biography: Andreas Gerstlauer received his Ph.D. degree in Information and Computer Science from the University of California, Irvine (UCI) in 2004. He is currently an Assistant Researcher in the Center for Embedded Computer Systems (CECS) at UC Irvine, working on electronic system-level (ESL) design tools. Commercial derivatives of such tools are in use at the Japanese Aerospace Exploration Agency (JAXA), NEC Toshiba Space Systems and others. Dr. Gerstlauer's research interests include system-level modeling, languages, methodologies, and embedded hardware and software synthesis.

Combining static and dynamic analyses for automated bug-finding
Monday, March 03, 2008
Christoph Csallner

Read More

Hide

Abstract: Finding bugs is like finding a few needles in an infinitely large haystack of program execution paths. False bug warnings are one of the biggest problems, both for automated correctness provers (such as type systems and model-checkers) and for automated bug-finders (such as static bug-pattern matchers). To address this problem, I will present three techniques for turning an existing, powerful but false-positive-ridden static analysis into a precise tool for automatic bug-finding.

First, we will automatically convert the output of a static analysis to concrete JUnit test cases, using constraint solving techniques. We thereby eliminate language-level false bug warnings and make the results easier to understand for human consumers. We will then add a dynamic invariant inference step to also address the harder problem of bug warnings that are technically correct but still irrelevant to the user (these bugs could occur, but only under obscure conditions). Finally, we will adapt dynamic invariant inference to work correctly with subtyping. Previous approaches do not take behavioral subtyping into account and therefore produce imprecise or inconsistent results, which can throw off automated analyses such as the ones we are performing for bug-finding.
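The first step, converting a static warning into a concrete test via constraint solving, can be illustrated with a toy sketch (Python rather than Java/JUnit, with a brute-force search standing in for a real constraint solver; the flagged function and its path condition are invented):

```python
# Toy sketch: a static analysis reports that `scale` may crash when the
# path condition "b - a == 0" holds; we search for a concrete input that
# satisfies the condition and package it as an executable test case.

def scale(a, b, x):
    return x / (b - a)          # potential ZeroDivisionError flagged statically

def path_condition(a, b):
    return b - a == 0           # condition under which the flagged path runs

def solve(condition, domain=range(-10, 11)):
    """Brute-force 'constraint solver': find a witness in a small domain.
    A real tool would hand the condition to an SMT/constraint solver."""
    for a in domain:
        for b in domain:
            if condition(a, b):
                return a, b
    return None

def generated_test():
    """The auto-generated test: run the flagged method on the witness and
    report whether the predicted crash actually occurs."""
    a, b = solve(path_condition)
    try:
        scale(a, b, 1)
        return "no crash: warning was a false positive"
    except ZeroDivisionError:
        return "confirmed crash with a=%d, b=%d" % (a, b)

print(generated_test())
```

A warning whose path condition has no satisfying input, or whose witness does not actually crash, can be filtered out instead of being shown to the user.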

I have implemented these techniques in the JCrasher, Check 'n' Crash, and DSD-Crasher automatic testing tools, which have been used by multiple research groups.

Biography: Christoph Csallner is currently a Ph.D. candidate at Georgia Tech, advised by Professor Yannis Smaragdakis. He worked on automated bug-finding for Google and Microsoft Research. He has received two Distinguished Paper Awards: the first at ISSTA 2006 (the ACM SIGSOFT International Symposium on Software Testing and Analysis) and the second at ASE 2007 (the IEEE/ACM International Conference on Automated Software Engineering).

Cross-Layer Customization Platform for Low-Power and Real-Time Embedded Applications
Thursday, February 28, 2008
Xiangrong Zhou

Read More

Hide

Abstract: Modern embedded applications have become increasingly complex and diverse in their functionalities and requirements. Data processing, communication and multimedia signal processing, real-time control, and various other functionalities often need to be implemented on the same System-on-Chip (SoC) platform. The tight power constraints and real-time guarantee requirements of these applications have become significant obstacles for traditional embedded system design methodologies. The general-purpose computing microarchitectures of these platforms are designed to achieve good performance on average, which is far from optimal for any particular application. The system must always assume worst-case scenarios, which results in significant power inefficiencies and resource under-utilization.

In my current research, we introduce a cross-layer application-customizable embedded platform, which dynamically exploits application information and fine-tunes system components at the system software and hardware layers. This is achieved with the close cooperation and seamless integration of the compiler, the operating system, and the hardware architecture. The compiler is responsible for extracting application regularities through static and profile-based analysis. The relevant application knowledge is propagated and utilized at run-time across the system layers through judiciously introduced reconfigurability at both the OS and hardware layers. The introduced framework comprehensively covers the fundamental subsystems of memory management and multi-tasking execution control.

Biography: Xiangrong Zhou is a Ph.D. candidate in the Department of Electrical and Computer Engineering at the University of Maryland, College Park. His current research interests include embedded systems, computer architecture, reconfigurable computing platforms, and hardware/software codesign. He received his B.S. degree from the Department of Automation at Tsinghua University, China in 1999. From 2001 to 2004, he worked as a Member of Technical Staff at Hughes Network Systems, Maryland, developing satellite remote terminals and gateways with various embedded processors, DSPs, and FPGAs.

Runtime Monitoring for Reliable Software
Monday, February 18, 2008
Feng Chen

Read More

Hide

Abstract: Runtime monitoring of requirements in software development can increase the reliability of the resulting systems. On the one hand, if monitoring is used as an integral part of a system to detect and recover from requirements violations at runtime, monitoring can increase the dependability and safety of the deployed system by guiding the running system to avoid catastrophic failures. On the other hand, if used to detect errors in programs, monitoring can bring more rigor to testing, because monitors can observe not only functional behaviors of programs at specific points, but also temporal behaviors that can refer to complex patterns and histories of actions.
In this talk, I will discuss a generic and efficient monitoring framework, called monitoring oriented programming (MOP), as well as predictive runtime analysis, a technique that effectively and soundly predicts concurrency bugs during testing by using static information to improve the coverage of runtime monitoring.
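The core idea of runtime monitoring can be sketched as a small state machine that checks a temporal property over the event stream of an instrumented program (a toy illustration in Python, not the MOP framework; the property and event names are invented):

```python
# Toy runtime monitor: a small automaton checks the temporal property
# "no write() after close()" over the event stream emitted by an
# instrumented program (property and event names are invented).

class FileMonitor:
    def __init__(self):
        self.state = "open"
        self.violations = []

    def on_event(self, event):
        # A write in the "closed" state violates the temporal property.
        if self.state == "closed" and event == "write":
            self.violations.append("write after close")
        elif event == "close":
            self.state = "closed"
        elif event == "open":
            self.state = "open"

monitor = FileMonitor()
for event in ["open", "write", "close", "write"]:   # instrumented trace
    monitor.on_event(event)
print(monitor.violations)   # the final write violates the property
```

In a MOP-style framework, such monitors are generated from declarative specifications and woven into the program, rather than written by hand; a violation handler can log the error or steer the program away from it.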

Biography: Mr. Feng Chen is a graduate student in the Department of Computer Science, University of Illinois at Urbana-Champaign. His research is in the area of program analysis, focusing, in particular, on using runtime monitoring and static analysis to increase the reliability of software. Other interests include formal methods, programming language semantics and design, and their use in software development. Feng received his BS and MS degrees in Computer Science from Peking University in 1999 and 2002, respectively.

Come see RoPro 2008 on February 9th in the UTA Nedderman Hall atrium
Saturday, February 09, 2008
RoPro 2008

Read More

Hide

Abstract: RoPro 2008 is the 8th annual CSE @ UTA High School Robot Programming Contest. This event is an outreach and recruiting activity of the CSE department that is run completely by our faculty, staff, and students. This year we are expecting over 100 high school students from 9 area high schools to compete. Come visit the event between 9am and 2pm to see what amazing things these future UTA students can do!

Biography:

Innovations from Sun Microsystems (for 26 years)
Friday, February 08, 2008
Conrad Geiger

Read More

Hide

Abstract: Since Sun's founding in 1982, the company has continually focused on investing in technical talent to fulfill new ideas and deliver new, innovative technology. In doing so, Sun continues today to provide extra value for its customers as well as leadership in the area of open standards for the rest of the computing industry. Conrad Geiger will speak about recent innovative technology and ideas from Sun that should be especially interesting and educational for all university and research customers. The topics covered during the talk will include industry-leading software, storage, networking (including Sun SPOT wireless sensors), and computing systems components (including supercomputer implementation). Sun invests heavily in its education and research customers' futures; Sun even offers free online software and hardware training classes for all of its university customers. Come and hear about these free offerings, which will also be discussed.

Biography: Conrad Geiger, Principal Engineer, first joined Sun Microsystems in 1987 in Seattle, Washington. He left Sun for a few years to work for NeXT Computer, a Steve Jobs startup that developed the basis for today's Mac OS X at Apple.

Since 1994, Conrad has worked exclusively with Sun's education and research customers out of Sun's Austin, Texas facility. Conrad's largest customer this year is the University of Texas at Austin, which is now deploying the largest publicly available supercomputing infrastructure at 500+ TeraFlops, all based on Sun's HPC products and innovative technology.

Prior to working for Sun, Conrad worked at IBM, NBI and Boeing. Conrad received his undergraduate degree from the University of Texas at Austin with additional studies at the University of Colorado and University of Washington.

Asynchronous Communication and Computing in Wireless Sensor Networks
Friday, February 01, 2008
Dr. Yonghe Liu

Read More

Hide

Abstract: In stark contrast with traditional data-forwarding networks exemplified by the Internet, wireless sensor networks are uniquely characterized by drastically low data rates, often several bytes per minute, owing to application-specific requirements. In existing designs, energy efficiency has overwhelmingly relied on coordinated sleep/wakeup schemes, where communications are synchronized into a short time window. Inevitably, this increases the collision probability and irrelevant packet listening, the two dominant power consumption components in wireless networks. In this talk, we describe an innovative asynchronous communication architecture, in which a sensor node is allowed to directly write data into a special, reactive module (RFID tag based) residing on the receiving node while its main platform (the central controller) is asleep. The result is a store-and-forward, asynchronous communication pattern that can achieve ultra energy efficiency.

Biography: Dr. Yonghe Liu received his Ph.D. degree in Computer Engineering from Rice University. Prior to joining UTA in 2005 as an Assistant Professor in the Department of Computer Science and Engineering, he worked at Texas Instruments. His research interests lie in various aspects of wireless networking and system integration. An active faculty member of CReWMaN, Dr. Liu directs the Security and Sensor Networking Lab (SSN), with funded projects supported by the Texas Advanced Research Program and the National Science Foundation.

Managing Unstructured Data for Content Intelligence
Thursday, January 31, 2008
Jerry O'Brien

Read More

Hide

Abstract: The functions of organizing, providing access to, and facilitating the exchange of information have long been essential responsibilities of organizations in government, commercial and industrial firms, and other sectors. Many, if not most, of the systems and work processes that are currently used for these functions are based on earlier approaches that were devoted exclusively to printed information. Today's knowledge managers must contend with information in numerous formats, including structured and unstructured data. The term "Content Intelligence" refers to work processes that enable organizations to access and utilize information across the enterprise based on its content, without regard to the formats used to create or store the information.


In a world of printed, digital, and "meta" data types that comprise everything from text to photos, video, sound, and engineering drawings, there are numerous challenges facing developers of Content Intelligence systems. This talk will discuss those challenges and a systematic approach to dealing with them. In particular, this talk will discuss the importance of defining the needs of all contributors, organizers and producers of information, cataloguing the current methodologies for accessing and exchanging information, and adopting a subject matter classification scheme as a backbone for managing content. The best solution will be the one that meets an organization's needs efficiently, reliably, and cost-effectively; thus, Content Intelligence system developers should look for opportunities to adapt existing methods and tools to the new system in addition to reviewing off-the-shelf technology products and services.

Biography: Mr. O'Brien is currently the President and a founder of Process Data Control Corporation (PDC Corp). He graduated with two undergraduate degrees from SUNY at Buffalo, Buffalo, NY: a BA in Psychology (1972) and a BA in Environmental Science (1973). He also obtained an MS in Urban and Environmental Studies from Rensselaer Polytechnic Institute, Troy, NY, in 1974. Mr. O'Brien worked for municipalities (Portsmouth, NH, and Portland, ME) as a City Planner after graduating from RPI, then moved to Texas to serve as the Technical Director of the North Central Texas Council of Governments, where he worked until 1979. He then worked with two consulting engineering firms (Camp Dresser and McKee, and Brown and Caldwell) until 1988, at which time he started Process Data Control Corporation (PDC Corp) with a business partner.

Empirical Software Engineering: Why and How do we Measure?
Friday, January 25, 2008
Jeffrey Carver

Read More

Hide

Abstract: The field of Empirical Software Engineering views software engineering as a laboratory science. Our goal is to better understand the practice of software engineering through the observation and measurement of human behavior as it relates to software engineering. As such, we conduct human subject studies on various software engineering methods and techniques. In addition, we mine and analyze data from existing software artifacts and repositories. This work lies at the intersection of Software Engineering and Psychology. In this talk, I will present an introduction to Empirical Software Engineering. I will discuss some of the basic concepts of conducting studies with human subjects, including how to design valid studies, how to measure, and how to evaluate the quality of the results. After this introduction, I will explain my ongoing research in the context of this background. My ongoing research covers the following topics: Software Architecture, Software Inspections, Software Engineering for Computational Science and Engineering, Computer Security, End-User Software Engineering, and Software Process Improvement.

Biography: Dr. Jeffrey Carver is an Assistant Professor in the Computer Science and Engineering Department at Mississippi State University. He received his PhD from the University of Maryland in 2003, under the supervision of Dr. Victor Basili. His PhD thesis was entitled "The Impact of Background and Experience on Software Inspections." His current research interests include: Empirical Software Engineering, Software Inspections, Software Architecture, Qualitative Methods, Software Process Improvement, Software Engineering for Computational Science and Engineering, and Computer Security. Dr. Carver's work has appeared in venues such as Empirical Software Engineering: An International Journal, CrossTalk, The International Symposium on Empirical Software Engineering (ISESE), The Conference on Software Engineering Education and Training (CSEE&T), and The International Conference on Software Engineering. His work has been funded by the National Science Foundation, The Army Corps of Engineers, the Army Research Labs, and the Air Force.

Bringing Order to Chaos: Applying Autonomics to Manage Spectrum in TV White Space
Wednesday, December 05, 2007
Dave Raymer

Read More

Hide

Abstract: Dave Raymer will present a Motorola Early Stage Accelerator (ESA) project called COGNOS. COGNOS is a collaborative effort between the Motorola Integrated Systems Research Lab, the Motorola Network Infrastructure Research Lab, and Motorola Corporate Standards, funded by ESA, Motorola's internal venture capital organization, to explore the management of spectrum utilization in TV white space using autonomics. The purpose of this project is to explore the standards and regulatory issues related to cooperative utilization of unlicensed spectrum in the analog TV channel space within the United States. This presentation will provide an overview of the project, presenting the problem as it is currently understood, the organizations involved in the project (and the roles those organizations play as part of the project), and perceived areas of concern. The purpose of the presentation is to enable a discussion that will hopefully lead to collaboration in this space between UTA and Motorola Labs.

Biography:

NOVEL NANOSTRUCTURED ZnO SENSORS COMPATIBLE WITH CMOS TECHNOLOGY
Friday, November 30, 2007
Agis Iliadis

Read More

Hide

Abstract: In this talk, recent advances in developing novel nanostructured gas sensors and high-sensitivity biosensors for trace-level proteins in the blood will be presented. The gas sensors are developed by large-area self-assembled ZnO nanostructures in a copolymer matrix on (100) Si wafers. The sensors are developed on Si wafers for monolithic integration with CMOS IC technology for the read-out and signal processing functions in smart wearable tag sensors. These novel gas sensors provide effective gas sensing at room temperature with fast (10-20 sec) response and recovery times, making them ideal for integration with Si CMOS technology and the development of early-warning smart tag sensor arrays for environmental, hazardous, toxic, and explosive applications. The biosensors are developed in ZnO/SiO2/Si surface acoustic wave devices with enhanced sensitivity to detect trace levels of Interleukin-6 and other proteins in human serum.

Biography: Dr. Agis A. Iliadis is a Professor in the Electrical and Computer Engineering Department, the Director of the Semiconductor Nanotechnology Research Laboratory, and a member of the Maryland NanoCenter of the University of Maryland at College Park, Maryland, USA. He received his M.Sc. and Ph.D. degrees in Electrical Engineering from the Department of Electrical Engineering and Electronics, University of Manchester Institute of Science and Technology (UMIST). His expertise is in the areas of nanotechnology, sensors, semiconductor devices/circuits, and CMOS IC technology. He is a senior member of the IEEE, a Distinguished Lecturer in the IEEE-EDS Society, an AdCom member of the IEEE-EDS Technical Committee on Electronic Materials, an AdCom member of the IEEE EDS Educational Activities Committee, and the EDS Representative to the IEEE-USA Professional Activities Board (PACE). He is a member of MRS, LEOS, InstPhys (UK), SPIE, TMS, DEPS, and ECS, and has organized and served on several conference committees.

Co-induction, Logic, and Infinite Computations
Thursday, November 29, 2007
Gopal Gupta

Read More

Hide

Abstract: Circular concepts have been banned from mathematics, set theory and computer science ever since Russell discovered his paradox. Co-induction has recently been introduced as a powerful technique for elegantly reasoning about infinite structures and infinite computations. In this talk we discuss the introduction of circular reasoning via co-induction into logic and logic programming. We also show how logic programming augmented with co-induction leads to more elegant solutions for difficult problems in the fields of model checking, non-monotonic reasoning, Boolean satisfiability, planning, real-time systems, and others.
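One way to see the computational content of co-induction is membership in a rational (cyclic, hence infinite) list: a query terminates by detecting that the search has returned to a node already on the current path, which is the goal-recurrence check behind co-inductive logic programming. A toy sketch in Python, not the speaker's system:

```python
# Toy sketch of reasoning over a cyclic (rational infinite) list: the
# membership query terminates either by finding the element or by noticing
# that the traversal has closed a cycle without finding it. The cycle check
# plays the role of goal recurrence in co-inductive SLD resolution.

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def comember(x, node, seen=None):
    """Does x occur in the (possibly cyclic) list starting at node?"""
    seen = seen or set()
    if node is None or id(node) in seen:
        return False            # finite end reached, or cycle closed without x
    if node.value == x:
        return True
    return comember(x, node.next, seen | {id(node)})

# Build the infinite list 1, 2, 3, 1, 2, 3, ... as a three-node cycle.
a, b, c = Node(1), Node(2), Node(3)
a.next, b.next, c.next = b, c, a

print(comember(2, a), comember(7, a))
```

Without the `seen` check, the query over the cyclic structure would never terminate; with it, queries over infinite-but-rational structures become decidable, which is the practical payoff of co-inductive execution.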

Biography: Gopal Gupta (http://www.utdallas.edu/~gupta/) received his MS and Ph.D. in computer science from the University of North Carolina at Chapel Hill in 1987 and 1991 respectively, and his B. Tech. in Computer Science from IIT Kanpur in 1985. Currently he is a Professor of Computer Science at the University of Texas at Dallas, where he also serves as the Associate Department Head. His areas of research interest are logic programming, programming language semantics and implementation, assistive technology, AI, and parallel processing. He serves as an area editor of the journal Theory and Practice of Logic Programming, and has served on numerous conference program committees. He is a member of the Executive Council of the Association for Logic Programming, as well as a past member of the board of the European Association for Programming Languages and Systems. He has received funding from several federal (NSF, DOE, DOEd, EPA) and international (NATO, AITEC of Japan) agencies for his research projects. His research has also led to the inception of two early-stage software companies.

Towards comprehensive strategies that meet the challenges of cyberspace science
Friday, November 16, 2007
Frederick Sheldon

Read More

Hide

Abstract: This presentation will provide a high-level overview of the Computational Science and Engineering Division (CSED) at Oak Ridge National Laboratory (ORNL), a mid-level overview and mission of the Cyberspace Science and Information Intelligence Research (CSIIR) group, and a brief assessment of the annual Cyber Security and Information Infrastructure Research Workshop (sponsored by CSED). We will talk about our strategic research thrusts as well as some details concerning partnerships in two areas, LANdroids and Intrinsically Assurable Resource-Aware MANET, and then delve into my pet projects: Heuristic Identification and Tracking of Insider Threat (preliminary results) and Cyber Security Econometrics (CSE), assisting OSD to formulate acquisition policy as it relates to NIST 800-53 (recommended security controls for federal systems).

The primary thesis of CSE is that integrating a software system's stakeholder value propositions into the system's definition, design, development, deployment, and evolution is critical to an IT/enterprise system's success. This work:

1) Stems from the analysis of (sources of) cyber security mishaps, assessments and the CSIIR workshop, which show that many mishaps/failures were caught in the vise of value-insensitive defenses.
2) Discusses promising research ideas for improving our capability to apply CSE using a combination of automation and methodology.
3) Presents a (work in progress) roadmap for making progress toward validating CSE and its benefits in terms of cyber security resource allocation policy/strategy and accountability.

The relations between value-based quantitative systems, risk assessment, and other cyberspace research and application areas are well founded. These relations, characterized by CSE, are unavoidably involved with software and information system product and process technology, and their interaction with human values. CSE's rationale is strongly empirical, but includes new concepts in need of stronger theory. CSE uses risk considerations to balance information assurance discipline and flexibility, and to answer other key "how much is enough?" questions. CSE will help to illuminate information technology policy decisions by identifying the quantitative and qualitative sources of cost and value associated with candidate decisions.


Biography: Currently a senior research staff scientist at Oak Ridge National Laboratory, Sheldon has over 25 years in the field of software engineering and computer science. He was on the faculty at Wash. State U. (WSU) and Univ. of Colorado (UCCS), and research staff at DaimlerChrysler (Stuttgart), Lockheed Martin Aeronautics Co. (LMAC, Ft. Worth), Raytheon (Dallas), and NASA Langley (both pre/post-doc NRC RA), and a NASA Ames/Stanford visiting scholar. He received his Ph.D./MS in 96/89 at The U. of Texas (Arlington) while, at the same time, he led several significant efforts at LMAC and Raytheon, including Software Formal Methods for Integrated Diagnostics and lead for the YF-22 VMS Kernel (today's F-22 has received the 2006 R.J. Collier Trophy [America's most prestigious award for aero/space development]). He founded the Software Engineering for Secure and Dependable Systems Lab in 1999 and is a senior member of the IEEE and member of ACM, IASTED, and AIAA, including Tau Beta Pi and Upsilon Pi Epsilon, and received the Sigma Xi award for an outstanding dissertation. He has published over 75 papers in journals, books and conferences (http://www.ioc.ornl.gov/sheldon). Sheldon has been Co-/PI on numerous research projects concerned with the development, validation and testing of models, safety/security-critical applications, methods and supporting tools for the creation of dependable software/systems (http://www.ioc.ornl.gov) in the Cyberspace Sciences and Information Intelligence Research Group.

Modeling Cybercraft
Thursday, November 15, 2007
Dr. Ben Abbott

Read More

Hide

Abstract: The defense of large-scale information systems is critical to the functioning of our national infrastructure. While many solutions have been proposed and implemented to address this threat, current solutions tend to involve many vendor-specific tools that cannot interoperate, cannot be controlled in a consistent manner, and do not provide a methodology to reason about a network’s security capabilities.

In response to this, the Air Force Research Laboratory is creating a new framework for computer system defense in which computers will be populated with distributed Cybercraft, which are simple, scalable programs designed to carry out specific, synchronized missions. This talk will describe SwRI's approach to managing this new type of software engineering problem. Additionally, an overview of related cyber-security efforts SwRI is pursuing will be described.

Biography: As an Institute Engineer in the Communications and Embedded Systems Department of Southwest Research Institute, Dr. Abbott has been involved with a variety of Defense Advanced Research Projects Agency (DARPA) sponsored research efforts. Dr. Abbott's currently active projects include earth hazard measurement using wireless sensor nodes, underwater wireless sensor nodes, model-based system synthesis for network security, and realistic implementations of software radios.

During the past fifteen years, Dr. Abbott has published over 40 technical journal and conference publications spanning the fields of real-time systems and parallel processing, among others. Prior to joining SwRI, Dr. Abbott worked as an Assistant Professor in the Electrical and Computer Engineering Department at Utah State University, as Research Faculty at Vanderbilt University, and as a Software Engineer at Collins Radio. He has a B.S. in Computer Science from Texas Tech (1983) and a Masters and Ph.D. in Electrical Engineering from Vanderbilt University (1989, 1994).

Ongoing Research in Pervasive Computing and Opportunities for Collaboration
Wednesday, November 14, 2007
Mohan Kumar

Read More

Hide

Abstract: First, a summary of the research outcomes of the recently completed PICO project and the ongoing PSI project will be presented. In particular, the mechanisms for modeling device features as services, and the seamless composition of services to provide complex high-level services, will be presented. The development of a prototype system for composing and maintaining services in heterogeneous pervasive environments will be discussed. Second, there will be a discussion of ongoing research in Pervasive Computing and Sensor Systems, and opportunities for research collaboration.

Biography: Mohan Kumar is a Professor in Computer Science and Engineering at the University of Texas at Arlington. His current research interests are in pervasive computing, wireless networks and mobility, active networks, mobile agents, and distributed computing. Recently, he has developed or co-developed algorithms/methods for service composition in pervasive environments; information acquisition, dissemination and fusion in pervasive and sensor systems; caching and prefetching in mobile, distributed, pervasive and P2P systems; and active-network based routing and multicasting in wireless networks. He has published over 140 articles in refereed journals and conference proceedings and supervised several doctoral dissertations and Masters theses in the above areas. He is a co-founder of the IEEE International Conference on Pervasive Computing and Communications (PerCom), serving as program chair (2003) and general chair (2005). Kumar is one of the founding editors of the Pervasive and Mobile Computing Journal and is on the editorial board of The Computer Journal. He is a senior member of the IEEE. Prior to joining The University of Texas at Arlington in 2001, he held faculty positions at the Curtin University of Technology, Perth, Australia (1992-2000), The Indian Institute of Science (1986-1992), and Bangalore University (1985-1986). Kumar obtained his PhD (1992) and MTech (1985) degrees from the Indian Institute of Science and the BE (1982) from Bangalore University in India.

Throttling Attackers in Peer-to-Peer Media Streaming Systems
Monday, November 12, 2007
William Conner

Read More

Hide

Abstract: Many peer-to-peer media streaming applications have been developed over the past few years. PROMISE, CoopNet, CoolStreaming, and PRIME are all examples of peer-to-peer media streaming systems. Although peer-to-peer designs offer many benefits, such as reduced costs and scalability, they also offer new opportunities for misbehavior. For example, a selfish peer might request more than its fair share of bandwidth, while a malicious peer might want to intentionally exhaust all of the available upload bandwidth in the system. Due to the lack of a central server, misbehaving peers can distribute their requests throughout the system such that they appear well-behaved to each individual node, but their aggregate behavior would appear to be selfish or malicious overall. Given this vulnerability, it becomes clear that such peer-to-peer systems need effective mechanisms to throttle the aggregate bandwidth consumed by each peer in the system. Unlike client-server architectures where everything can be monitored at a central server, peer-to-peer architectures will need a more robust and scalable alternative.

In this talk, I will argue that a subset of trusted peers can collectively limit the bandwidth usage of all the other untrusted peers in the system. We refer to these trusted nodes as "kantoku" nodes. Kantoku nodes collectively provide a service that accounts for the bandwidth usage of each peer in the system. Using kantoku nodes, selfish and malicious peers attempting to exceed their fair share of bandwidth usage are throttled according to their level of abuse. Preliminary results indicate that kantoku nodes can significantly improve the streaming quality received by well-behaved nodes in the presence of malicious attackers attempting to exhaust the system's available upload bandwidth.
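The accounting idea can be sketched as follows (a minimal illustration, with a single accountant object standing in for the replicated kantoku service; the fair-share limit and peer names are invented):

```python
# Toy sketch of kantoku-style accounting: trusted nodes keep a global
# per-peer tally of bandwidth granted across the whole system, so a peer
# that spreads small requests over many serving nodes is still throttled
# once its aggregate use exceeds its fair share (limit is invented).

FAIR_SHARE_KBPS = 500

class KantokuAccountant:
    def __init__(self):
        self.usage = {}                       # peer id -> aggregate kbps granted

    def authorize(self, peer, requested_kbps):
        """Grant only as much bandwidth as remains of the peer's fair share."""
        used = self.usage.get(peer, 0)
        granted = max(0, min(requested_kbps, FAIR_SHARE_KBPS - used))
        self.usage[peer] = used + granted
        return granted

accountant = KantokuAccountant()
# A greedy peer spreads four 200 kbps requests across different servers;
# each individual request looks modest, but the aggregate is capped.
grants = [accountant.authorize("peer42", 200) for _ in range(4)]
print(grants)           # later requests are throttled to the remaining share
```

A central server could do this trivially; the point of the kantoku design is to distribute and replicate this tally across trusted nodes so the accounting itself scales and survives attack.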

Biography: William Conner is a Ph.D. student in the Multimedia Operating System and Networking (MONET) Research Group at the University of Illinois at Urbana-Champaign. His current research interests are large-scale distributed multimedia systems and security. William earned his B.S. at Washington University in St. Louis in 2002 and his M.S. in Computer Science from the University of Illinois at Urbana-Champaign in 2004.

Systems Biology Research Experience at Pfizer-Groton Labs
Friday, November 02, 2007
Preetam Ghosh

Abstract: In this talk, we will formulate a computational reaction model following a chemical kinetic theory approach to predict the binding rate constant for the siRNA-RISC complex formation reaction. This allows us to study the potency difference between 2-nt 3' overhangs and blunt-ended siRNA molecules. The rate constant predicted by this model will be fed into a stochastic simulation of the RNAi system (using the Gillespie stochastic simulator) to study the overall potency effect of this potential drug. The following observations will be made from the system simulation:

1) Stochasticity in the transcription/translation machinery has no observable effects.

2) The mRNA levels jump back to saturation after a longer time when blunt-ended siRNAs are used.

3) Sustained gene silencing can be achieved only if there is a way to replenish dsRNAs in the cell.

4) Initial findings show about 3.5 times more blunt-ended molecules will be required to keep down the mRNA levels. However, the length of the silenced period is longer for blunt-ended molecules.

[This work was done while the speaker was a summer intern in the Pfizer-Groton Labs in Connecticut in Summer 2007. He also filed a patent disclosure on this work.]
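For readers unfamiliar with the Gillespie stochastic simulator mentioned above, here is a minimal sketch on a toy mRNA birth-death system (the two-reaction model and rate constants are illustrative only, not the talk's RNAi network):

```python
import random

def gillespie_mrna(k_transcribe, k_degrade, t_end, seed=0):
    """Minimal Gillespie stochastic simulation of mRNA birth/death.
    Two reactions: 0 -> mRNA (rate k_transcribe) and mRNA -> 0
    (rate k_degrade * count). Returns the sampled (time, count) path."""
    rng = random.Random(seed)
    t, mrna = 0.0, 0
    trajectory = [(t, mrna)]
    while t < t_end:
        a1 = k_transcribe          # propensity of mRNA production
        a2 = k_degrade * mrna      # propensity of mRNA degradation
        a0 = a1 + a2
        t += rng.expovariate(a0)   # exponentially distributed waiting time
        if rng.random() * a0 < a1: # choose which reaction fires
            mrna += 1
        else:
            mrna -= 1
        trajectory.append((t, mrna))
    return trajectory
```

The mRNA count fluctuates around k_transcribe/k_degrade at steady state; the talk's model adds the siRNA-RISC reactions on top of machinery like this.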

Biography: Preetam Ghosh received his B.E. degree in Computer Science and Engineering from Jadavpur University, Calcutta, India, in 2000. He began his graduate study in the Department of Computer Science and Engineering at UTA in Fall 2002. He is currently a senior PhD student and plans to defend his doctoral dissertation this Fall. His PhD research deals with stochastic modeling, analysis and simulation of complex biological networks and systems.
He has published extensively in high quality journals and conferences.

Wireless Sensor Networks: Applications, Systems, and Security
Friday, October 26, 2007
Dr. Richard Han

Abstract: Wireless sensor networks (WSNs) have recently attracted great interest in the computer
science research community because they present a vision of computing that is
ubiquitous, networked, embedded, and wireless. In situ deployment of widespread low
cost WSNs has spurred new research in the design of sensor operating systems, networks,
security, algorithms, HCI, databases, multimedia, etc. This talk will present my group's
research in building practical experimentally validated systems, infrastructure and
applications to demonstrate the utility of WSNs. I will discuss our award-winning
FireWxNet deployment of WSNs in the Bitterroot National Forest to monitor weather
conditions surrounding active wildland fires. I will also discuss our research on airborne
WSNs, which we call SensorFlock. If time permits, I will describe our research on secure
tree-structured routing and traffic analysis countermeasures for WSNs.

Biography: Dr. Richard Han is an Associate Professor in the Department of Computer Science at the
University of Colorado at Boulder. Prof. Han leads the MANTIS wireless sensor
networking (WSN) research project at CU-Boulder, http://mantis.cs.colorado.edu. Dr.
Han's research interests span wireless networks, embedded systems, and ubiquitous and
mobile computing. His research contributions in WSNs include: the open source Mantis
sensor operating system (MOS); the FireWxNet fire sensor network application; the X-MAC
duty-cycled wireless medium access control protocol; INSENS secure tree-structured
routing for WSNs; secure code distribution in WSNs; traffic analysis
countermeasures for WSNs; the NodeMD remote diagnostic system for WSNs; and the
SensorFlock airborne WSN. Dr. Han received an NSF CAREER award, IBM Faculty
awards, and Best Paper award at ACM MobiSys 2006. Prior to joining CU, he was a
Research Staff Member at IBM's T.J. Watson Research Center. He graduated with a PhD
in Electrical Engineering from the University of California at Berkeley in 1997, and with
a B.S. in Electrical Engineering, with Distinction, from Stanford University in 1989.

Handling 3D as Multimedia Objects: Issues in Storage, Retrieval, and Delivery
Wednesday, October 24, 2007
B. Prabhakaran

Abstract: 3D models and multi-attribute motion/haptic data are relatively new
forms of multimedia information. 3D models are represented by voluminous
information that describe the numerous polygonal meshes comprising the
model. 3D motion capture data, animation motions, and sensor data from
gesture sensing devices are examples of multi-attribute continuous
motion sequences. These sequences have multiple attributes rather than
only one attribute as in time series data.

Content-based retrieval and delivery of 3D models and multi-attribute
motion sequences facilitate several interesting applications in
education, training,and entertainment. "Integrated" solutions for for
content-based retrieval and delivery might help in many of these
applications. For instance, layered representation of huge 3D models not
only helps in progressive reconstruction on the client side during
delivery over a network but also can help in efficient comparison during
content-based retrieval from a 3D models repository. In this talk, we
discuss the techniques we have been working on for content-based
retrieval and delivery of 3D data.

Biography: Dr. B. Prabhakaran is an Associate Professor in the Computer
Science Department, University of Texas at Dallas. He has been working in
the area of multimedia systems: animation & multimedia databases,
authoring & presentation, resource management, and scalable web-based multimedia
presentation servers. Dr. Prabhakaran received the prestigious National
Science Foundation (NSF) CAREER Award in 2003 for his proposal on
Animation Databases. He is also the Principal Investigator for US Army
Research Office (ARO) grant on 3D data storage, retrieval, and delivery.
He has published several research papers in various refereed conferences
and journals in this area.

He has served as an Associate Chair of the ACM Multimedia Conferences in
2006 (Santa Barbara), 2003 (Berkeley, CA), 2000 (Los Angeles, CA) and
in 1999 (Orlando, FL). He has served as guest-editor (special issue on
Multimedia Authoring and Presentation) for ACM Multimedia Systems
journal. He is also serving on the editorial board of Multimedia Tools
and Applications journal, Springer Publishers. He has also served as
program committee member on several multimedia conferences and
workshops. He has presented tutorials in ACM Multimedia and other
multimedia conferences.

Dr. Prabhakaran has served as a visiting research faculty with the
Department of Computer Science, University of Maryland, College Park. He
also served as a faculty in the Department of Computer Science, National
University of Singapore as well as in the Indian Institute of
Technology, Madras, India.

Networking Research Activities at University of West Hungary
Friday, October 19, 2007
Karoly Farkas

Abstract: In this talk, I will briefly present my personal research activities in the last couple of years in the area of mobile ad hoc networks (MANETs).
This covers the development of a service provisioning framework, called SIRAMON, to be used in MANETs especially emphasizing the management module. As part of this module, we implemented a zone-based service management architecture in which every zone has a zone server that handles the client nodes belonging to the given zone. To select and maintain the zone servers, we have developed a distributed algorithm based on Dominating Set computation. To designate the most powerful nodes to act as servers, we have also developed a weight computation mechanism. To create a stable zone server set, we use prediction and try to assess the link quality changes and thus the network topology variations.
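The server-selection idea can be sketched with a centralized greedy weighted dominating set (the talk's algorithm is distributed and prediction-based; this toy version only shows how coverage and node weight interact):

```python
def greedy_zone_servers(adj, weight):
    """Centralized greedy weighted dominating set sketch.
    adj: node -> set of neighbor nodes; weight: node -> capability score
    (higher = more powerful node). Repeatedly pick the node whose closed
    neighborhood covers the most still-uncovered nodes, breaking ties in
    favor of the heavier (more capable) node."""
    uncovered = set(adj)
    servers = []
    while uncovered:
        best = max(adj, key=lambda v: (len(({v} | adj[v]) & uncovered),
                                       weight[v]))
        servers.append(best)
        uncovered -= {best} | adj[best]  # best now dominates its zone
    return servers
```

Every node ends up either a zone server or a neighbor of one, which is exactly the domination property the zone architecture relies on.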

Additionally, I will give an overview about our new research project dealing with the development of sustainable applications based on mobile sensors and networks. Finally I will sketch some collaboration possibilities.

Biography: Dr. Karoly Farkas received the Ph.D. degree in Computer Science in 2007 from ETH Zurich, Switzerland, and the M.Sc. degree in Computer Science in 1998 from Technical University of Budapest (TU Budapest), Hungary.
Currently he is working as an Associate Professor at University of West Hungary, Sopron, Hungary.

His research interests cover the field of communication networks, especially autonomic, self-organized and wireless mobile ad hoc networks.
His core research area deals with service provisioning in mobile ad hoc networks. He has published more than 30 scientific papers in various journals, conferences and workshops and has given invited talks.

He has supervised a number of student theses, participated in several research projects, coordinated the preparation of an EU IST research project proposal, and acted as reviewer and organizer of numerous scientific conferences. He served on the program committee of the IADAT-tcn conference in 2005 and 2006, and he acts as technical co-chair of the 3rd WICON (Wireless Internet) conference in 2007. He is a member of the IEEE and a fellow of the European Multimedia Academy.

Did Somebody say 'Eureka'? - Towards a greater understanding of how technologies are born and evolve.
Wednesday, October 17, 2007
Dr. V. P. Kochikar

Abstract: The task of predicting the successful emergence of a technology presents formidable challenges. Be that as it may, there are certain flaws in our view of how technology evolves that make the task appear more daunting than it has to be. We explain one such shortcoming, the Creative Burst Fallacy, and analyze how the resulting overemphasis on the role of creativity in technological innovation has hobbled our ability to foresee emerging technologies. We also present four mechanisms by which technologies are born, none of which places a particular premium on creativity or serendipity.

Biography: Dr. V. P. Kochikar is Associate Vice President of Infosys Technologies.

Can You Make BIG Bucks from Your Research?
Thursday, October 11, 2007
Ron Jennings

Abstract: This talk is an overview of the characteristics and concepts involved in commercializing university research innovations from a venture capital perspective. Among the subjects covered are: what it takes to become an entrepreneur, the phases of starting and maturing a successful company, technology areas of interest for venture investment, how to develop and sell a great idea, how ideas are evaluated for investment, and challenges in moving from the lab to disruptive products.

Biography: Mr. Ron Jennings joined STARTech in 2000 as a partner and chief technical officer. He has over 25 years of experience in technical and executive roles in a variety of industries including telecommunications, supercomputers, electric power, oil and gas exploration, defense, health care, and broadcasting. Ron is responsible for supporting a broad range of technology sector initiatives as well as managing STARTech's information systems infrastructure. Most recently, Ron served as vice president of development at TelOptica, Inc., an early stage company specializing in optimization of telecommunications networks. Prior to that, he was vice president of development at Orametrix, Inc., another Dallas start-up company. He has also worked for Hewlett-Packard, Convex Computer Corporation, and Texas Instruments where he was a senior member of technical staff. Ron holds a bachelor of science in electrical engineering from Michigan State University and is a Texas registered professional engineer. He is also a holder of six patents.


Connected Dominating Set in Wireless Networks
Wednesday, October 10, 2007
Ding-Zhu Du

Abstract: The connected dominating set plays an important role in wireless
networks and hence gains a lot of attention in approximation algorithm
design. In this talk, we study greedy approximations for
computing the minimum connected dominating set and a solution to a
long-standing open problem in greedy algorithm design and analysis.
The main result comes from a recent paper of Du, Graham, Pardalos, Wan, Wu and Zhao, which will appear in SODA'08.
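For intuition, a simple (non-optimized) greedy for a connected dominating set on a connected graph looks like this; it is not the SODA'08 algorithm, just the flavor of greedy approximation being analyzed:

```python
def greedy_cds(adj):
    """Illustrative greedy connected dominating set for a connected
    graph. adj: node -> set of neighbors. Start from a max-degree node
    and grow the set by adding the frontier node (a neighbor of the
    current set) that dominates the most not-yet-dominated nodes, so the
    set stays connected by construction."""
    start = max(adj, key=lambda v: len(adj[v]))
    cds = [start]
    dominated = {start} | adj[start]
    while len(dominated) < len(adj):
        # nodes adjacent to the current set but not yet in it
        frontier = {u for v in cds for u in adj[v]} - set(cds)
        best = max(frontier, key=lambda u: len(adj[u] - dominated))
        cds.append(best)
        dominated |= {best} | adj[best]
    return cds
```

On a path a-b-c-d this picks b, then c: every node is in the set or adjacent to it, and the set induces a connected subgraph.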

Biography: Dr. Ding-Zhu Du recently joined UT Dallas after serving as a Program Director
at the National Science Foundation from 2002 to 2005. He was a professor in the Department of Computer Science and Engineering, University of Minnesota, from 1991 to 2005.
Dr. Du received his M.S. degree in 1982 from the Institute of Applied Mathematics, Chinese Academy of Sciences, and his Ph.D. degree in 1985 from
the University of California at Santa Barbara. He worked at the
Mathematical Sciences Research Institute, Berkeley, in 1985-86,
at MIT in 1986-87, and at Princeton University in 1990-91; he was a professor at City University of Hong Kong in 1998-1999 and a research professor at the Institute of Applied Mathematics, Chinese Academy of Sciences, in 1987-2002.
Currently, he is a professor in the Department of Computer Science,
University of Texas at Dallas, and the Dean of Science
at Xi'an Jiaotong University. His research interests include
combinatorial optimization, communication networks, and theory of computation.
He has published more than 140 journal papers and written 10 books.
He is the editor-in-chief of the Journal of Combinatorial Optimization and of the book series on Network Theory and Applications. He also serves on the editorial boards of more than 15
journals. He is well-known for proving the Gilbert-Pollak conjecture
on the Steiner ratio, the Derman-Leiberman-Ross conjecture on
optimal consecutive 2-out-of-$n$ systems in reliability, and the global
convergence of Rosen gradient projection method in nonlinear programming.
In 1998, he received the CSTS Prize from INFORMS (formed by the merger of
the Operations Research Society of America and The Institute of Management Sciences)
for research excellence in the interface between Operations Research and
Computer Science. In 1996, he received the 2nd Class National Natural Science Prize in China. In 1993, he received the 1st Class Natural Science Prize from
Chinese Academy of Sciences.

Computational Musicology
Wednesday, October 03, 2007
Hari Sahasrabuddhe

Abstract: This talk is an overview of our exploration of the mathematical and computational structures with which we can model Hindustani classical music (the art music of Northern India - HCM) as it is practiced today.

Musical performance can be studied in the form of notation or sound recording. Our work attempts to build models based on notation. Combining a finite-state model of Raga with algorithmic models of improvisation leads to sophomoric performances. The next goal of finding semantics of melody has proved far more challenging. Our continuing efforts in this direction will be discussed.

Biography: Dr. Sahasrabuddhe received his B.Tech. degree in Electrical Engineering from IIT Bombay, and M.Tech. and Ph.D. degrees from IIT Kanpur. He held faculty positions in Computer Science at IIT Kanpur, University of Pune and IIT Bombay. He was also a visiting faculty at University of Nebraska-Lincoln and University of Waterloo. A strong advocate of computer science education in India, Prof. Sahasrabuddhe has participated in the evolution of a Distance Education Technology at IIT Bombay. His main contribution is the design of the audio paths in the entire process of interactive telecast and recording of sessions. His research in the last 20 years has focused on computational musicology. More recently, with his students he developed a new theory of tuning in Indian Classical Music and verified it with the help of renowned musicologist Pandit Babanrao Haldankar. He is a member of the Computer Society of India, the Indian Science Congress, and the Indian Musicology Society.

An Economic Framework for Dynamic Spectrum Access and Service Pricing
Monday, July 16, 2007
Mainak Chatterjee

Abstract: In this talk, we will understand what Dynamic Spectrum Access (DSA) is and why there is such a thrust toward it by federal agencies and the community at large. We will discuss the concept of DSA that will allow the radio spectrum to be traded in a market-like scenario, allowing wireless service providers (WSPs) to lease chunks of spectrum on a short-term basis. Currently, there is little understanding of how such a dynamic trading system will operate so as to make the system feasible in economic terms. Therefore, consistent economic models must be used to guide the dynamic spectrum allocation process and the resource management algorithms that the providers use, such that there are sufficient incentives for the providers to offer better and newer services.
We will analyze the overall system (i.e., spectrum allocation and interaction of end users with the WSPs) from an economic point of view.
We will propose an auction model that dynamically allocates spectrum to the WSPs based on their bids and maximizes revenue and spectrum usage.
We will borrow techniques from game theory to capture the competition among WSPs and propose a dynamic pricing strategy; existence of price equilibrium will also be shown. Some preliminary results, obtained through simulations, will also be presented.
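A toy version of one auction round might look as follows (a uniform-price sealed-bid allocation; the talk's actual auction model and revenue objective may differ):

```python
def allocate_spectrum(bids, chunks):
    """Toy single-round sealed-bid spectrum auction sketch.
    bids: WSP name -> bid per chunk. The `chunks` highest bidders each
    win one chunk, and all winners pay the highest losing bid (a uniform
    clearing price, a common incentive-friendly choice)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winners = [wsp for wsp, _ in ranked[:chunks]]
    # clearing price: first excluded bid, or zero if supply exceeds demand
    price = ranked[chunks][1] if len(ranked) > chunks else 0.0
    return winners, price
```

With bids A=5, B=3, C=1 and two chunks, A and B win and both pay 1, the highest losing bid.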

Biography: Dr. Mainak Chatterjee is an Assistant Professor in the School of Electrical Engineering and Computer Science at the University of Central Florida in Orlando. He received his Ph.D. degree from the Department of Computer Science and Engineering at The University of Texas at Arlington in 2002.
Prior to that, he completed his B.Sc. in Physics (Honors) from the University of Calcutta in 1994 and his M.E. in Electrical Communication Engineering from the Indian Institute of Science, Bangalore, in 1998.

He is a recipient of the Young Investigator Award from the Air Force Office of Scientific Research (AFOSR). At UTA, he was recognized with the Outstanding CSE Dissertation Award in 2002. Dr. Chatterjee's research interests include economic issues in wireless networks, applied game theory, resource management and quality-of-service provisioning, ad hoc and sensor networks, CDMA data networking, and link layer protocols.
He serves on the executive and technical program committee of several international conferences.

Web: http://www.eecs.ucf.edu/~mainak

Bayesian Reinforcement Learning
Thursday, March 29, 2007
Mohammad Ghavamzadeh

Abstract: Policy gradient methods are reinforcement learning algorithms that adapt a parameterized policy by following a performance gradient estimate. These algorithms have recently received considerable attention as a means to sidestep problems of partial observability, policy oscillation, and even divergence encountered in value function-based reinforcement learning methods. This talk will present two Bayesian policy gradient algorithms. These algorithms use Gaussian processes to define a prior distribution over the performance gradient, and obtain closed-form expressions for its posterior distribution, conditioned on the observed data. The posterior mean serves as the policy gradient estimate, and is used to update the policy, while the posterior covariance allows us to gauge the reliability of the update. This reduces the number of samples needed to obtain accurate gradient estimates. In the first algorithm, the basic observable unit, upon
which learning and inference are based, is a complete trajectory, allowing the algorithm to handle non-Markovian systems. The second algorithm takes advantage of the Markov property of the system trajectories and uses individual state-action-reward transitions as its basic observable unit. This helps reduce variance in the gradient estimates, and facilitates handling problems with long trajectories.
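For context, the classical Monte Carlo (likelihood-ratio) gradient estimate that these Bayesian algorithms refine can be sketched on a softmax bandit policy; the Bayesian variants replace this raw sample average with a Gaussian-process posterior mean:

```python
import math
import random

def softmax_probs(theta):
    """Action probabilities of a softmax policy with parameters theta."""
    z = [math.exp(t) for t in theta]
    s = sum(z)
    return [x / s for x in z]

def mc_gradient(theta, reward, n_samples, rng):
    """Classical Monte Carlo policy gradient estimate for a one-step
    (bandit) softmax policy. Uses the score function
    d log pi(a)/d theta_i = 1{i == a} - pi_i and averages
    reward * score over sampled actions."""
    grad = [0.0] * len(theta)
    for _ in range(n_samples):
        p = softmax_probs(theta)
        a = rng.choices(range(len(theta)), weights=p)[0]  # sample action
        r = reward(a)
        for i in range(len(theta)):
            grad[i] += r * ((1.0 if i == a else 0.0) - p[i])
    return [g / n_samples for g in grad]
```

With two actions where only action 0 is rewarded, the estimate pushes theta[0] up and theta[1] down; the talk's point is that a GP prior over the gradient makes such estimates accurate with far fewer samples.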

Biography: Mohammad Ghavamzadeh received a Ph.D. degree in Computer Science from the University of Massachusetts Amherst in 2005. Since September 2005 he has been a postdoctoral fellow at the Department of Computing Science at the University of Alberta. His research interests lie primarily in
Artificial Intelligence and Machine Learning, with emphasis on decision making under uncertainty using principled mathematical tools from probability theory, decision theory, and statistics. His current research is mostly focused on using recent advances in statistical machine learning, especially Bayesian reasoning and kernel methods, to develop more efficient reinforcement learning algorithms.
Protein remote homology inference and multiple sequence alignment
Wednesday, March 28, 2007
Jimin Pei

Abstract: Information about protein structure, function and evolution can be derived from remote homology inference. Multiple sequence alignments have broad applications in homology inference, structure modeling and evolutionary analysis. We developed PROMALS, a multiple alignment method that combines sequence database searches, structure prediction, and profile-profile hidden Markov models to improve the alignment quality of distantly related proteins. Tested on several datasets, PROMALS shows better results than other leading alignment methods. PROMALS alignments were used to detect proteins potentially involved in gamete membrane fusion.

Biography: Jimin Pei received his B.S. degree from the University of Science and Technology of China in 1999, and his Ph.D. degree in Molecular Biophysics from the University of Texas Southwestern Medical Center in 2004. His Ph.D. work was on combining evolutionary and structural information for protein homology inference, sequence alignment and structure prediction. Since then, he has been a postdoctoral fellow in the Department of Biochemistry and the Howard Hughes Medical Institute at UT Southwestern.

Warp Processing: Making FPGAs Ubiquitous via Invisible Synthesis
Monday, March 26, 2007
Greg Stitt

Abstract: FPGAs are an increasingly popular type of integrated circuit that for many applications can provide 10x, 100x, or even 1000x speedups compared to microprocessors. However, FPGAs are not a mainstream technology due to the expertise required to create custom circuits. Although high-level synthesis techniques have been introduced to ease FPGA development, such techniques have yet to achieve commercial success due in part to the difficulty of incorporating such approaches into software tool flows. Thus, we introduced warp processing as a transparent synthesis approach that hides the FPGA from a software developer by performing dynamic on-chip synthesis. From a software developer's point of view, a warp processor looks identical to a standard microprocessor, and therefore requires no additional programming effort while resulting in the performance improvements obtained from FPGA circuits. Warp processing also enables dynamic hardware optimizations, expandable FPGAs, custom accelerators for multi-threaded applications, and custom communication for multi-core architectures. Initial results show that warp processors can achieve speedups ranging from 5x to over 500x compared to current multi-core architectures.

Biography: Greg Stitt is a Ph.D. candidate at the University of California, Riverside. His research interests include embedded systems, synthesis, compilers, reconfigurable computing, hardware/software co-design, and architecture.

Enabling Data Retrieval: by Ranking and Beyond
Friday, March 23, 2007
Chengkai Li

Abstract: Database management systems (DBMSs) are facing challenges in supporting
non-traditional data retrieval for emerging applications. We need
retrieval systems over data, much like a "Google" for databases,
paralleling the well-established information retrieval over text. Such
systems should allow users to form flexible and intuitive queries
capturing their information needs, and to explore the databases
effectively. In the talk, I will discuss this exciting research area and
introduce my work in this direction. In particular I will present
RankSQL, a DBMS that provides a systematic and principled framework for
ranking by extending relational algebra. I will further introduce our
work on ranking aggregate queries. Effective data retrieval mechanisms
go beyond just ranking. I will discuss our proposal of generalizing
Group-By to clustering, parallel to the generalization from Order-By to
ranking, and combining the two constructs. Moreover, I will briefly
mention our study of inverse ranking queries.
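The kind of ranking query a rank-aware engine optimizes can be illustrated in a few lines (this sketch simply sorts outside the DBMS; RankSQL's contribution is treating ranking as a first-class operation inside relational query processing):

```python
import heapq

def top_k(rows, scores, k):
    """Toy rank-aware retrieval sketch: each row is scored by several
    'soft predicates' whose values are combined by a monotone function
    (here, a plain sum), and only the k best rows are returned."""
    return heapq.nlargest(k, rows, key=lambda r: sum(s(r) for s in scores))
```

For example, ranking hotels by cheapness plus rating returns the best trade-off row rather than all rows, which is why pushing ranking into the algebra (instead of computing every score and sorting) pays off.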

Biography: Chengkai Li is a Ph.D. candidate in the Department of Computer Science,
University of Illinois at Urbana-Champaign. His general research
interests are in the field of databases, with current focus on data
retrieval. He also works on Web information management and XML.
Chengkai received a B.S. and an M.E. in Computer Science from Nanjing
University.

Human Activity Language
Friday, March 09, 2007
Gutemberg Guerra-Filho

Abstract: We propose a linguistic framework for the modeling and learning of human activity representations. This framework is a novel learning approach with implications for important theoretical issues (e.g. language origin, conceptual grounding) and applications to several problems. The Human Activity Language (HAL) represents the sequential and parallel aspects of human movement with perceptual and generational properties. HAL consists of kinetology, morphology, and syntax.

Kinetology, the phonology of human movement, finds basic primitives for human motion (segmentation) and associates them with symbols (symbolization). We also introduce five principles on which kinetology should be based and evaluated. This way, kinetology provides a symbolic representation for human movement that allows synthesis, analysis, and symbolic manipulation. Kinetology also has applications in compression, decompression, and indexing of motion data.

The morphology of a human action is related to the inference of essential parts of the movement (morpho-kinetology) and its structure (morpho-syntax). In order to learn the morphemes and their structure, we present a grammatical inference methodology and introduce a parallel learning algorithm to induce a grammar system representing a single action. In practice, the morphology is concerned with the construction of a praxicon, a lexicon of actions, to aid an artificial cognitive system or a computer animation system.

The syntax of human activities involves the construction of motor sentences using action morphemes. A sentence may range from a single action morpheme (nuclear syntax) to a sequence of sets of morphemes. A single morpheme is decomposed into analogs of lexical categories: nouns, adjectives, verbs, and adverbs. Sets of morphemes represent simultaneous actions (parallel syntax), and a sequence of movements is related to the concatenation of activities (sequential syntax).

Our approach is able to address several problems related to data-driven computer animation. Nuclear syntax, especially adverbs, is related to the motion interpolation problem; parallel syntax addresses the slicing problem; and sequential syntax is proposed as an alternative method to the transitioning problem. In computer vision, surveillance relies on automatic activity detection and recognition based on action representations. In humanoid robotics, adequate movement models are detailed domain knowledge for the solution of complex nonlinear dynamics problems related to motor coordination. This results in skill acquisition and behavior programming. A further extension of our learning framework leads towards object recognition and multi-modal multimedia annotation.

Biography: Gutemberg Guerra-Filho is a Ph.D. Candidate in the Computer Science Department at the University of Maryland, College Park. He received
M.Sc. degrees in Computer Science from the University of Maryland and the State University of Campinas, Brazil, in 2006 and 1998, respectively. As a member of the Computer Vision Laboratory, his current research involves the development of a sensory-motor language for human
activity understanding. This linguistic learning framework has
applications in Computer Vision and Graphics, Humanoid Robotics, Multi-Modal Multimedia, Data Mining, and Artificial Intelligence. His other research interests are Computational Geometry, Spatial
Databases, and Combinatorial Optimization.

Protocols for Efficient Data Authentication
Wednesday, March 07, 2007
Nikos Triandopoulos

Abstract: We consider the problem of authenticating data in untrusted or adversarial computing environments: when the distributor of the data is not the source of the data, and thus is not trusted by the end user, how can data received be proven authentic? Data authentication constitutes a new dimension in data management and data structure design. At the same time, the problem captures the security needs of many computing applications that exchange and use sensitive information in hostile distributed environments and its importance increases given the trend in modern system design towards decentralized architectures with minimal trust assumptions. In this talk, we focus on the design of data structures and protocols that allow the secure and efficient authentication of dynamic data maintained by an untrusted entity (not the data creator), supporting the correctness verification of operations performed on the data. We first present a new technique for distributed data authentication, showing how data stored and retrieved over a peer-to-peer network can be efficiently validated. Based on the design of an efficient distributed authentication tree, our approach provides reliable distributed storage, secure against replay attacks and consistent with the update history. We also describe a new framework for authenticating general queries on structured data, satisfying important properties in terms of expressiveness and complexity. We discuss how this framework can be applied for the efficient verification of operations on a file system that is outsourced to an untrusted server and conclude with some interesting research directions.
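As background, the standard Merkle hash tree underlying such authentication trees can be sketched as follows (a generic textbook construction, not the paper's distributed variant):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash-tree digest over a list of data blocks; the trusted source
    signs only this root."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes needed to verify leaves[index] against the root,
    each tagged with whether our node is the left child."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """An untrusted distributor supplies leaf and proof; the client
    recomputes the path and checks it against the signed root."""
    node = h(leaf)
    for sibling, node_is_left in proof:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root
```

The proof is logarithmic in the number of blocks, which is why hash trees scale to large outsourced datasets; the paper builds distributed, update-consistent structures on top of this primitive.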

Biography: Nikos Triandopoulos is a postdoctoral research fellow at the Institute for Security Technology Studies at Dartmouth College. He received his diploma in Computer Engineering and Informatics at the University of Patras, Greece, in 1999 and his Sc.M. in Computer Science at Brown University in 2002. Nikos completed his Ph.D. in Computer Science at Brown in 2006 and his dissertation studies the problem of authenticating information in hostile and adversarial computing environments. His primary research interests are in information security, cryptography and algorithms. He has been a recipient of the Kanellakis Fellowship and the Technological Innovation Award from Brown University.

Learning Embeddings for Similarity-Based Retrieval
Monday, March 05, 2007
Vassilis Athitsos

Read More

Hide

Abstract: Similarity-based retrieval is the task of identifying database patterns that are the most similar to a query pattern. Retrieving similar patterns is a necessary component of many practical applications, in fields as diverse as computer vision, speech recognition, and bioinformatics. This talk presents BoostMap, a method for efficient similarity-based retrieval in spaces with computationally expensive distance measures. Our method constructs embeddings that map database and query patterns into a vector space with a computationally efficient distance measure. Using such a mapping, similar patterns can be retrieved efficiently - often orders of magnitude faster compared to retrieval using the original distance measure. In the BoostMap method, embedding construction is treated as a machine learning problem, and embedding quality is optimized using information from training data. A key property of the learning-based formulation is that the optimization criterion does not depend on geometric properties and is equally valid in both metric and non-metric spaces. In experiments with several datasets, our method compares favorably to alternative methods for efficient retrieval, and provides highly competitive results for applications such as handwritten character recognition and time series indexing.
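One classical way to build such embeddings, which BoostMap generalizes, uses distances to a handful of fixed reference objects as coordinates, then retrieves with a cheap filter step followed by an expensive refine step. A rough sketch with edit distance standing in for an expensive measure; BoostMap itself learns a weighted combination of such 1D embeddings via boosting, which this does not show:

```python
def edit_distance(a, b):
    """Expensive distance measure (stand-in for e.g. DTW or shape matching)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def embed(x, references, dist):
    """Map x into R^k via distances to k fixed reference objects.
    Each coordinate F_r(x) = dist(x, r) is a simple 1D embedding."""
    return [dist(x, r) for r in references]

def l1(u, v):
    return sum(abs(a - b) for a, b in zip(u, v))

def retrieve(query, database, references, dist, filter_size=10):
    """Filter-and-refine: rank cheaply in the embedded space, then re-rank a
    short candidate list with the expensive distance. (In a real system the
    database embeddings would be precomputed offline, not per query.)"""
    q_emb = embed(query, references, dist)
    ranked = sorted(database, key=lambda x: l1(q_emb, embed(x, references, dist)))
    return min(ranked[:filter_size], key=lambda x: dist(query, x))
```

With a good embedding, the true nearest neighbor almost always survives the filter step, so only `filter_size` expensive distance computations are needed per query instead of one per database item.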

Biography: Dr. Athitsos received the BS degree in mathematics from the University of Chicago in 1995, the MS degree in computer science from the University of Chicago in 1997, and the PhD degree in computer science from Boston University in 2006. In 2005-2006 he worked as a researcher at Siemens Corporate Research, developing methods for database-guided medical image analysis. Since October 2006 he has been a postdoctoral research associate in the Computer Science department at Boston University. His research interests include computer vision, machine learning, and data mining. His recent work has focused on efficient similarity-based retrieval, gesture recognition, shape modeling and detection, and medical image analysis.

Adaptive Representations for Reinforcement Learning
Monday, February 26, 2007
Shimon Whiteson

Read More

Hide

Abstract: In reinforcement learning, a computer, robot, or other agent seeks an
effective behavioral policy for tackling a sequential decision task.
One limitation of current methods is that they typically require a
human to manually design a representation for the solution (e.g. the
internal structure of a neural network). Since poor design choices
can lead to grossly suboptimal policies, agents that automatically
adapt their own representations have the potential to dramatically
improve performance. This talk introduces two novel approaches for
automatically discovering high-performing representations. The first
approach, called evolutionary function approximation, uses
evolutionary methods to optimize representations for neural network
function approximators. Hence, it evolves agents that are better
able to learn. The second approach, called adaptive tile coding,
begins with coarse representations and gradually refines them during
learning, analyzing the current policy and value function to deduce
the best refinements. Empirical results in multiple domains
demonstrate that these techniques can substantially improve
performance over methods with fixed representations.
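The tile-splitting idea behind adaptive tile coding can be sketched in a few lines. This toy version splits a tile at its midpoint on demand; the method in the talk additionally analyzes the current policy and value function to decide which tiles to refine:

```python
class AdaptiveTiling:
    """1D state space carved into intervals ('tiles'), one value per tile.
    Starts coarse; refine() splits a tile in two, letting the value function
    represent finer distinctions only where they are needed."""

    def __init__(self, lo=0.0, hi=1.0):
        self.bounds = [lo, hi]        # tile i covers [bounds[i], bounds[i+1])
        self.values = [0.0]

    def tile_of(self, s):
        for i in range(len(self.values)):
            if self.bounds[i] <= s < self.bounds[i + 1]:
                return i
        raise ValueError("state outside tiling")

    def value(self, s):
        return self.values[self.tile_of(s)]

    def update(self, s, target, alpha=0.1):
        """Standard incremental update toward a learning target."""
        i = self.tile_of(s)
        self.values[i] += alpha * (target - self.values[i])

    def refine(self, i):
        """Split tile i at its midpoint; both halves inherit its value."""
        mid = (self.bounds[i] + self.bounds[i + 1]) / 2
        self.bounds.insert(i + 1, mid)
        self.values.insert(i, self.values[i])
```

Before a split, all states in a tile share one value; after it, the two halves can diverge, which is exactly the extra representational power the agent buys by refining.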

Biography: Shimon Whiteson is a doctoral candidate and assistant instructor in the Department of Computer Sciences at The University of Texas at Austin. His research focuses on reinforcement learning for real-world domains that are continuous and stochastic. For his thesis, he developed methods to improve the performance of function approximators for temporal difference methods by automatically optimizing their internal representations. In 2006, he received an IBM PhD Fellowship and two Best Paper Awards at the GECCO-06 conference. He plans to graduate in May 2007.

Protein Interaction Module Detection Using Matrix-based Graph Algorithms
Wednesday, February 21, 2007
Chris Ding

Read More

Hide

Abstract: Proteins carry out most cellular processes as protein modules. Systematic detection of protein functional modules provides essential knowledge linking proteome dynamics to cellular functions. We describe two matrix-based graph algorithms for computing protein modules: spectral clustering and clique/biclique algorithms. Matrix-based learning algorithms have been going through a renaissance in recent years and are shaping up as a significant new direction; we outline several fundamental advances in the field. Applying these algorithms to the Yeast, Pyrococcus, Sulfolobus, and Halobacterium interaction networks, we obtain a large number of protein interaction modules. Some of these discovered protein complexes have been experimentally verified by our collaborators. We discuss the biological significance of the discovered protein modules; a number of uncharacterized proteins are found to be new members of important protein complexes.
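For a flavor of the spectral approach, here is a minimal two-way spectral partition that splits a graph by the sign of the Fiedler vector; the module-detection algorithms described in the talk are considerably more elaborate:

```python
import numpy as np

def spectral_bipartition(A):
    """Two-way spectral partition of a graph with adjacency matrix A:
    split nodes by the sign of the Fiedler vector, i.e. the eigenvector of
    the graph Laplacian L = D - A with the second-smallest eigenvalue."""
    D = np.diag(A.sum(axis=1))
    L = D - A
    eigvals, eigvecs = np.linalg.eigh(L)   # eigh returns ascending eigenvalues
    fiedler = eigvecs[:, 1]
    return fiedler >= 0                    # boolean cluster labels
```

On a graph made of two dense groups joined by a few edges, the Fiedler vector takes opposite signs on the two groups, so the sign split recovers the natural two-module structure.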

Biography: Dr. Chris Ding is a staff computer scientist at Lawrence Berkeley National Laboratory. His research focuses on bioinformatics, data mining, information retrieval, and high performance computing. He earned a Ph.D. from Columbia, and worked at Caltech and the Jet Propulsion Lab before joining Berkeley Lab in 1996. He has served on several NSF review panels, on the program committees of many data mining and bioinformatics conferences, and on the editorial board of the Int'l J. Data Mining and Bioinformatics.

Comparative Analysis of Molecular Interaction Networks
Tuesday, February 20, 2007
Mehmet Koyuturk

Read More

Hide

Abstract: High-throughput experiments and the resulting databases capture relationships and interactions between biomolecules. These interactions enable modeling and analysis of a cell from a systems perspective - generally using network models. In this talk, we focus on the development of computational tools and statistical models for comparative analysis of molecular interaction networks. We first discuss the problem of identifying conserved sub-networks in a collection of interaction networks belonging to diverse species. The main algorithmic challenges here stem from the NP-hard subgraph isomorphism problem that underlies frequent subgraph discovery. Three decades of research into theoretical aspects of this problem have highlighted the futility of syntactic approaches, thus motivating the use of semantic information. Using a biologically motivated homolog contraction technique for relating proteins across species, we render this problem tractable. We experimentally show that the proposed method can be used as a pruning heuristic that accelerates existing techniques significantly, as well as a standalone tool that conveys significant biological insights at near-interactive rates. With a view to understanding the conservation and divergence of modular substructures, we also develop network alignment techniques, grounded in theoretical models of network evolution. In order to assess the statistical significance of the patterns identified by our algorithms, we probabilistically analyze the distribution of highly connected and conserved subgraphs in random graphs. Our methods and algorithms are implemented on various platforms and tested extensively on a comprehensive collection of molecular interaction data, illustrating their effectiveness in terms of providing novel biological insights as well as computational efficiency.
This is joint work with Yohan Kim, Shankar Subramaniam (University of California, San Diego), Wojciech Szpankowski, and Ananth Grama (Purdue University) and is supported by the National Institutes of Health.
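The homolog contraction step can be pictured as relabeling each protein with its homolog group and collapsing the resulting parallel edges, so that networks from different species share one vocabulary. An illustrative sketch; the protein and group names below are made up:

```python
def contract_homologs(edges, homolog_group):
    """Contract a protein interaction network by mapping each protein to its
    homolog (ortholog) group and collapsing parallel edges. After contraction,
    networks from different species are labeled over the same group alphabet,
    which sidesteps the subgraph-isomorphism bottleneck in cross-species
    frequent subgraph discovery."""
    contracted = set()
    for u, v in edges:
        gu, gv = homolog_group[u], homolog_group[v]
        if gu != gv:                      # drop within-group self-loops
            contracted.add((min(gu, gv), max(gu, gv)))
    return contracted
```

Two species whose networks differ at the protein level can yield identical contracted graphs, making conserved sub-network discovery a matter of comparing labeled edge sets rather than solving subgraph isomorphism.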

Biography: Dr. Koyuturk received his B.S. (1998) and M.S. (2000) degrees in Electrical and Electronics Engineering, and Computer Engineering, respectively, from Bilkent University, Turkey. During his graduate studies at the Department of Computer Science at Purdue University, he worked on a number of problems in the areas of Computational Biology and Bioinformatics, Parallel and Distributed Computing, and Scientific Computing. His thesis focused on algorithmic and analytical aspects of comparative analysis of biological networks. His collaborations with domain experts in this area resulted in several significant publications and software tools. Since receiving his Ph.D. in August 2006, he has been a post-doctoral research associate in the same department.

Herding Micro-Robots
Tuesday, December 12, 2006
Igor Paprotny

Read More

Hide

Abstract: I begin by describing a mobile, untethered, electrostatic micro-robot,
with dimensions 250 um x 60 um (Donald et al., 2005), developed in
joint work in our group. This micro-robot consists of a curved
cantilever steering arm attached to an untethered scratch-drive actuator
(USDA). Both the steering arm and the USDA are electrostatically
powered, and can be operated independently using a global power delivery
signal, commanding the robot to move straight or turn. These two
motion-primitives enable the device to move anywhere within a planar
operating environment. I will then present our ongoing work to extend
the functionality of our micro-robots to enable concurrent operation of
multiple devices within the same power-delivery substrate. I will
introduce new concepts and experimental results that enable us to
simultaneously control many micro-robots within the same operating
environment.
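The idea of addressing individual robots through one global signal can be caricatured with robots that respond only inside designed-in control bands. The voltages and motion bookkeeping below are invented for illustration; the real devices rely on differences in their electromechanical response:

```python
class MicroRobot:
    """Toy model of a globally powered micro-robot. All robots see the same
    control voltage; each one's steering arm engages only inside its own
    (designed-in) band, so a single broadcast signal can make different
    robots execute different motion primitives (turn vs. straight)."""

    def __init__(self, lo, hi):
        self.band = (lo, hi)   # hypothetical arm-engagement voltage band
        self.turns = 0
        self.straights = 0

    def step(self, voltage):
        if self.band[0] <= voltage <= self.band[1]:
            self.turns += 1        # arm down: robot pivots
        else:
            self.straights += 1    # arm up: scratch drive moves straight
```

Driving the shared signal through different bands in sequence steers each robot along its own trajectory, which is the essence of concurrent control without per-robot wiring.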

Biography: Igor Paprotny received the B.S. degree in Mechatronics from NKI College
of Engineering, Oslo, Norway, and the B.S.E. and M.S.E. degrees in
Industrial Engineering from Arizona State University. He is currently a
Ph.D. candidate at Dartmouth Computer Science Department in Hanover,
NH. His current research interests lie in the area of micro-robotics,
focusing on design and implementation of micro-robotic systems using
microfabrication technologies.
His past research agenda includes the optimization of the discrete-event
simulation process and layout design of semiconductor factories. He
spent a total of three years working in the Semiconductor industry,
primarily designing Fab-wide automated material-handling systems.

Micromobility Issues in IP Networks
Friday, December 08, 2006
Akkihebbal L. Ananda

Read More

Hide

Abstract: Mobile IP, the current standard for IP mobility, was designed for environments where mobility is the exception, not the norm. With advances in technology, the availability of devices capable of connecting to multiple networks simultaneously, and the WLAN revolution, there is a growing need to support seamless mobility for wireless Internet access.
Micromobility protocols address the important issue of seamless handoff at the network layer to provide a comprehensive mobility management solution. Several micromobility mechanisms have been proposed over the past decade that essentially employ a proxy-based approach to hide user mobility from global peers. In this work we discuss two protocols, one each for IPv4 and IPv6 networks, that address the micromobility issue.
While the IPv6-based approach, called Auto-update Micromobility (AUM), takes an end-to-end approach to host mobility, the IPv4 approach, called Reverse ICMP Redirect (RIR), presents a simple and scalable solution for location management. The AUM protocol was implemented in the Linux 2.4.22 kernel and extensive performance analysis was conducted. The results show remarkable improvements in transport- and application-layer performance, with reduced handoff durations and improved throughput even when the node exhibits high mobility.

Biography: Dr. Akkihebbal L. Ananda is an Associate Professor in the Computer Science Department of the School of Computing at the National University of Singapore. His research areas of interest include transport protocols, IP mobility, IPv4 and IPv6 transition mechanisms, and wireless and sensor networks.

Dr. Ananda obtained M.Tech degree in Electrical Engineering from the Indian Institute of Technology, Kanpur in 1973, and M.Sc and Ph.D degrees in computer science from the University of Manchester, UK, in
1981 and 1983 respectively.

Currently Dr. Ananda is spending his sabbatical in CSE@UTA.

Business Intelligence: Business-Driven Analytic Solutions
Tuesday, December 05, 2006
Jorge Ramirez

Read More

Hide

Abstract: The products have all the cachet, but the technology doesn't stop where your iTunes ends. Apple Computer is the leading credit card fraud prevention company in the country. Data mining is the reason why. Find out what other companies are missing out on, because they don't have a data mining scientist leading their business intelligence efforts. Science and technology are helping to drive the increased profitability of this reinvented computer company, and NOT just on the product side. Come find out how.

Biography: Dr. Jorge C. G. Ramirez has over 20 years of research, development and leadership experience in software engineering, data mining and intelligent systems. He has led domestic and international development projects in academia and companies ranging from startups with less than 30 employees to large teams in Fortune 500 companies. Dr. Ramirez’s experience spans multiple areas of expertise from tool development, information systems, and software engineering to artificial intelligence applications, data mining, and business intelligence for a variety of industries including defense, medicine, education, healthcare, insurance, state and federal governments, and most recently high-tech. Dr. Ramirez received his B.S. from Georgia Tech, M.S. from Louisville and Ph.D. from UTA. Additionally, he is a former faculty member in the CSE department, and is now the Senior Scientist for Apple Computer (Cupertino, CA) in their Worldwide Operations Business Intelligence Group and resides in Santa Cruz County, CA.

Data Mining for Malicious Code Detection and Security Applications
Friday, December 01, 2006
Bhavani Thuraisingham

Read More

Hide

Abstract: Data mining is the process of posing queries and extracting patterns, often previously unknown, from large quantities of data using pattern matching or other reasoning techniques. Data mining has many applications in security, including national security as well as cyber security. Threats to national security include attacks on buildings and the destruction of critical infrastructures such as power grids and telecommunication systems. Data mining techniques are being investigated to find out who the suspicious people are and who is capable of carrying out terrorist activities. Cyber security is concerned with protecting computer and network systems against corruption due to Trojan horses, worms and viruses. Data mining also provides solutions in intrusion detection, auditing, credit card fraud detection and biometrics-related applications. Other applications include data mining for malicious code detection, such as worm detection, and managing firewall policies. The challenge is to reduce false positives and false negatives. Additionally, we need to maintain the privacy of individuals; much research has been carried out on privacy-preserving data mining.
This presentation will provide an overview of data mining and the various types (real-time and non-real-time) of threats, and then discuss the applications of data mining for malicious code detection and cyber security. We will also discuss the consequences for privacy.
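The false-positive/false-negative trade-off mentioned above is usually tracked with confusion-matrix rates. A small illustrative helper, not taken from the talk:

```python
def detection_metrics(labels, predictions):
    """Confusion-matrix rates for a malicious-code detector. Lowering the
    alert threshold cuts false negatives (missed malware) but typically
    raises false positives (false alarms) - the tuning challenge in practice."""
    tp = sum(1 for y, p in zip(labels, predictions) if y and p)
    fp = sum(1 for y, p in zip(labels, predictions) if not y and p)
    fn = sum(1 for y, p in zip(labels, predictions) if y and not p)
    tn = sum(1 for y, p in zip(labels, predictions) if not y and not p)
    return {"false_positive_rate": fp / (fp + tn),
            "false_negative_rate": fn / (fn + tp)}
```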

Biography: Dr. Bhavani Thuraisingham joined The University of Texas at Dallas in 2004 as a
Professor of Computer Science and Director of the Cyber Security Research
Center in the Erik Jonsson School of Engineering and Computer Science. She is
an elected Fellow of the IEEE for her work in data security. She received the
IEEE Computer Society's prestigious 1997 Technical Achievement Award for
outstanding and innovative contributions to secure data management.
Her work in information security and information management has
resulted in over 70 journal articles, over 200 refereed papers in conferences and
workshops, and three US patents. She is the author of seven books in data
management, data mining and data security. She has given over 30 keynote
presentations at various technical conferences and has also given invited talks at
the White House Office of Science and Technology Policy and at the United
Nations on Data Mining for counter-terrorism. She currently serves as the Editor
in Chief of Computer Standards and Interfaces Journal.
Prior to joining UTD, Dr. Thuraisingham was at NSF and MITRE
Corporation. At NSF she established the Data and Applications Security Program
and co-founded the Cyber Trust theme and was involved in inter-agency
activities in data mining for counter-terrorism. She joined MITRE in 1989, worked
in the Information Security Center and was later a department head in Data and
Information Management as well as Chief Scientist in Data Management. She
has served as an expert consultant in information security and data management
to the Department of Defense, the Department of Treasury and the Intelligence
Community for over 10 years. Her industry experience includes six years of
research and development at Control Data Corporation and Honeywell Inc.
Dr. Thuraisingham was educated in the United Kingdom both at the
University of Bristol and at the University of Wales.

To Create New Internet Architectures and Distributed Systems
Wednesday, October 18, 2006
Guru Parulkar

Read More

Hide

Abstract: Dr. Parulkar's lecture will cover the GENI initiative and its work to create new Internet architectures and distributed systems, with discussion also on how researchers can participate.

Biography: Dr. Parulkar is driving the technical direction of the Global Environment for Networking Innovations, or GENI, initiative, which is aimed at creating the future Internet with help from the broader research community.

Prior to joining NSF, Dr. Parulkar spent several years in the Silicon Valley involved in high-tech startups. Among his multiple ventures, he co-founded Growth Networks and served as its CTO and director. Growth Networks was acquired by Cisco Systems.

He is a former professor of computer science at Washington University and served as director of its Applied Research Laboratory and led large multi-investigator systems projects in gigabit networking, next generation Internet, multimedia systems and networking, active networking, and network measurement and visualization. He received his Ph.D. in computer science from the University of Delaware in 1987.

High-Performance Data Broadcasting and Packet Forwarding in Wireless Meshes
Friday, October 13, 2006
Archan Misra

Read More

Hide

Abstract: While wireless mesh networks are gaining popularity as a broadband access
alternative in urban and rural communities, the relatively low network
throughput for latency-sensitive traffic remains a potential bottleneck.
In the first part of this talk, we'll address the case of broadcast data
traffic in such meshes, and focus on two specific features of such mesh
architectures: i) the ability of nodes to dynamically adjust their link
transmission rate and ii) the emerging popularity of multi-radio,
multi-channel mesh nodes. For the current IEEE 802.11a/b/g standards, rate
adjustment on individual links is limited to unicast transmissions.
We'll show that exploitation of such multi-rate capability for broadcast
traffic offers significant benefits, and first present multi-rate routing
algorithms that can lower the broadcast latency for single-radio mesh nodes
by as much as 60%. Subsequently, we'll consider the case of multi-radio
mesh nodes and present modified broadcasting algorithms that adapt to the
degree of available radio parallelism to reduce the broadcast latency by
another ~20-30%.

In the second part of the talk, we'll focus on the 'packet forwarding'
problem in a wireless mesh environment and present an efficient interface
contained forwarding (ICF) architecture for a "wireless router", i.e.,
a forwarding node with a single wireless NIC. To avoid the twin latency
overheads of host-lookup and channel access, we present a slightly modified
version of the 802.11 MAC, called DCMA, that uses MPLS-like labels in the
control packets to forward the packet in an atomic, pipelined fashion.
Simulation studies will demonstrate that these techniques can extend the
operating capacity of wireless meshes by ~30%.
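A back-of-the-envelope model of why multi-rate helps broadcast: treat each hop's delay as packet size divided by link rate and compute when the last node is covered. This ignores channel contention and the one-to-many nature of wireless broadcast, so it is only a rough illustration, not the algorithms from the talk:

```python
import heapq

def broadcast_latency(adj, source, pkt_bits=8000):
    """Earliest time each node hears a flooded packet when a hop over a link
    of rate r bits/s takes pkt_bits / r seconds (Dijkstra over transmission
    delays); returns the time until the last node is covered. Comparing a
    rate-annotated topology against the same topology pinned to the lowest
    common (base) rate shows where multi-rate broadcast wins."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > dist.get(u, float("inf")):
            continue
        for v, rate in adj[u]:
            tv = t + pkt_bits / rate
            if tv < dist.get(v, float("inf")):
                dist[v] = tv
                heapq.heappush(heap, (tv, v))
    return max(dist.values())
```

When a slow direct link can be replaced by two fast hops, the multi-rate relay path delivers the packet sooner than a single base-rate transmission, which is the intuition behind the latency reductions reported in the talk.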

Biography: Dr. Archan Misra is a Senior Researcher with the Autonomic Systems and
Networks Department at the IBM TJ Watson Research Center, Hawthorne, NY.
He received Ph.D. degree in Electrical and Computer Engineering from the
University of Maryland at College Park in May, 2000, and B.Tech degree
in Electronics and Communication Engineering from IIT Kharagpur, India
in July 1993. At IBM, for the past 5 years, he has been working on and
leading projects on pervasive infrastructures and protocols, including
algorithms and architectures for SIP-based collaborative applications,
high-performance wireless mesh networks, and data management for
sensor-based applications. He has published extensively in the areas
of wireless networking, pervasive services and mobility management
and is a co-author on papers that received the Best Paper awards in
ACM WOWMOM 2002 and IEEE MILCOM 2001. He is currently on the editorial
board of the IEEE Wireless Communications Magazine, and chairs the IEEE
Computer Society's Technical Committee on Computer Communications (TCCC).

Wireless Telecom Network Planning and Optimization
Friday, October 06, 2006
Wei Yuan

Read More

Hide

Abstract: In this presentation, I would like to share the wireless network planning and optimization issues that wireless service providers face in their daily operations, how those issues are resolved today, and how providers would like them to be resolved. I will show how these issues can be formulated as a series of optimization problems and solved using advanced computing techniques, including optimization algorithms you learned in school.

Biography: Wei Yuan is an expert in wireless data networks and operations research with nearly 10 years of experience in wireless telecom network optimization and system performance. He is currently Vice President of Operations Research at Cerion Inc.

Wei started his telecom career in 1995 at Nortel Networks' Wireless Systems Engineering division, where he developed wireless network optimization algorithms that provided cost savings to wireless carriers. He also worked at Sonus Networks, where he was involved in the design of the Sonus soft-switch distributed architecture and its billing system architecture. After leaving Sonus, Wei joined Lucrometrics, where he developed an optimization algorithm for increasing telecom network profitability through capital deferment.

Future Impact of Blogs, Podcasting and Other Social Computing Technologies
Thursday, September 28, 2006
Charlene Li

Read More

Hide

Abstract: The lecture will present survey data on the adoption of these technologies and an analysis on how adoption will impact institutions like business, media, education and politics.

Biography: Ms. Li is a graduate of Harvard University, with an M.B.A. from Harvard Business School. Prior to joining Forrester, she served on the board of directors for the Newspaper Association of America's New Media Federation and managed new product development at the San Jose Mercury News.