Information technologies today can inform each of us about the shortest path from origin to destination, but they contain no incentives or coordinated alternatives that manage this information for collective benefit. To obtain such benefits, we need not only good estimates of how traffic forms, but also targeted strategies that remove enough vehicles from the most heavily used roads in a feasible way.
Moreover, reaching the target vehicle reduction is not trivial: it requires individual sacrifices, such as some drivers taking alternative routes, shifting departure times, or even changing modes of transportation. The opportunity is that during large events (carnivals, festivals, sporting events, etc.), traffic inconveniences in large cities are unusually high yet temporary, and the population may be more willing to adopt collective recommendations for the social good.
This project focuses on understanding the impact of large-scale events and city growth on urban traffic and commuting, and on proposing reasonable, feasible travel demand management strategies to mitigate future congestion. We take a fast-growing city, Doha, Qatar, as our testbed. Traffic in Doha today is notoriously bad, and the population is growing rapidly. Doha will host the FIFA World Cup in 2022, which will attract a great number of visitors and further increase the pressure on the road network. To meet these challenges, we use big data resources to understand the impact of the World Cup and to assist policy makers with more reasonable planning strategies.
So far, we have estimated the travel demand of the local population using Bluetooth and census data. This demand is assigned to the road network, and the travel time of each trip can then be estimated.
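As an illustration of this assignment step, the sketch below assigns origin-destination demand to shortest paths with Dijkstra's algorithm and sums total travel time. The network, node names, travel times, and demand figures are all hypothetical toy values, not the Doha data used in the project.

```python
import heapq

# Toy road network: node -> {neighbor: free-flow travel time in minutes}.
# Names and times are hypothetical, for illustration only.
graph = {
    "A": {"B": 5, "C": 8},
    "B": {"C": 2, "D": 7},
    "C": {"D": 4},
    "D": {},
}

def shortest_time(graph, origin, dest):
    """Dijkstra's algorithm: minimum travel time from origin to dest."""
    dist = {origin: 0}
    pq = [(0, origin)]
    while pq:
        t, node = heapq.heappop(pq)
        if node == dest:
            return t
        if t > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph[node].items():
            nt = t + w
            if nt < dist.get(nbr, float("inf")):
                dist[nbr] = nt
                heapq.heappush(pq, (nt, nbr))
    return float("inf")

# Origin-destination demand (number of trips), assigned to shortest paths.
od_demand = {("A", "D"): 120, ("B", "D"): 80}
total_minutes = sum(n * shortest_time(graph, o, d)
                    for (o, d), n in od_demand.items())
print(total_minutes)  # → 1800
```

Real assignment models also capture congestion effects (link travel time rising with flow), which this all-or-nothing sketch omits.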
This project falls into three categories: 1) the use of machine learning and other advanced analytical techniques to discover new information related to on-field performance; 2) the development and application of novel techniques that provide new ways of viewing sporting events; and 3) a system for content-adaptive video retargeting.
"Riesz Pyramid for Fast Phase-Based Video Magnification," N. Wadhwa, M. Rubinstein, F. Durand, and W. T. Freeman, IEEE International Conference on Computational Photography (ICCP), 2014.
“Automatically Recognizing On-Ball Screens,” A. McQueen, J. Wiens, and J. Guttag, Sloan Sports Analytics Conference, March 2014.
“A Data-driven Method for In-game Decision Making in MLB,” G. Ganeshapillai and J. Guttag, KDD 2013; also presented at the Sloan Sports Analytics Conference, March 2014.
“Modeling and Optimizing Eye Vergence Response to Stereoscopic Cuts,” Krzysztof Templin, Piotr Didyk, Karol Myszkowski, Mohamed M. Hefeeda, Hans-Peter Seidel, Wojciech Matusik, ACM Transactions on Graphics (Proc. ACM SIGGRAPH), 2014, accepted.
“Anahita: A System for 3D Video Streaming with Depth Customization”, Kiana Calagari, Krzysztof Templin, Tarek Elgamal, Khaled Diab, Piotr Didyk, Wojciech Matusik, Mohamed Hefeeda, ACM Multimedia 2014.
"Player Motion Analysis: Automatically Classifying NBA Plays," Mitchell Kates, M.Eng. Thesis, MIT, September 2014.
"Video Magnification in Presence of Large Motions," Mohamed A. Elgharib, Mohamed Hefeeda, Frédo Durand, and William T. Freeman, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
"Visual Vibrometry: Estimating Material Properties From Small Motion in Video," Abe Davis, Katherine L. Bouman, Justin G. Chen, Michael Rubinstein, Frédo Durand, William T. Freeman, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 5335-5343.
"Image-Space Modal Bases for Plausible Manipulation of Objects in Video," Abe Davis, Justin Chen, Frédo Durand, accepted to ACM SIGGRAPH Asia.
"Efficient Cloud Photo Enhancement Using Compact Transform Recipes," Michael Gharbi, Sylvain Paris, Frédo Durand, YiChang Shih, Jonathan Ragan-Kelley, Gaurav Chaurasia, accepted to ACM SIGGRAPH Asia.
"Deviation Magnification: Revealing Geometric Deviation in a Single Image," Tali Dekel, Neal Wadhwa, Donglai Wei, Frédo Durand, William T. Freeman, accepted to ACM SIGGRAPH Asia.
This project focuses on how data management can be used to facilitate social computing. The Humanitarian Technologies research thrust seeks to establish key technologies required to facilitate disaster management and humanitarian relief activities based on social media. These technologies leverage current social networks and primarily focus on data consumption, generation, and integration.
Lalana Kagal, CSAIL
Carlos Castillo, QCRI
Patrick Meier, QCRI
“Intelligent Exploration of the Linked Data Cloud”, a paper on the Linked Data exploration tool, submitted to the International Conference on Intelligent User Interface (IUI) in October 2013.
"Democratizing Mobile App Development for Disaster Management," Fuming Shih, Oshani Seneviratne, Daniela Miao, Ilaria Liccardi, Lalana Kagal, Evan Patton, Patrick Meier, and Carlos Castillo, IJCAI Workshop on Semantic Cities, 2013.
"Molding the Web of Data – An Architecture for Mobile Linked Data," Oshani Seneviratne, Evan W. Patton, Daniela Miao, Fuming Shih, Weihua Li, Lalana Kagal, and Carlos Castillo, International Semantic Web Conference (ISWC), October 2014.
"Developing Mobile Linked Data Applications" Oshani Seneviratne, Evan W. Patton, Daniela Miao, Fuming Shih, Weihua Li, Lalana Kagal, and Carlos Castillo, International Semantic Web Conference (ISWC), October 2014.
"CIMBA - Client-Integrated MicroBlogging Architecture," Andrei Vlad Sambra, Sandro Hawke, Tim Berners-Lee, Lalana Kagal, and Ashraf Aboulnaga, International Semantic Web Conference (ISWC), October 2014.
“Mobile App Development for Crisis Data”, Anubhav Jain, Julius Adebayo, Eduardo De Leon, Weihua Li, Lalana Kagal, Carlos Castillo and Patrick Meier. Elsevier proceedings of the Humanitarian Technologies Conference 2015.
“The Role of Mobile Technologies in Humanitarian Relief,” presented at Information Systems for Crisis Response and Management (ISCRAM) 2015.
The research challenge we address is that of securing computing infrastructure against a broad class of cyberattacks. Our objective is to develop new techniques that can remove many of the vulnerabilities that attackers exploit, and that can predict and intercept new (zero-day) attacks exploiting previously unknown vulnerabilities. These objectives are realized through a number of sub-projects described in the proposal that fall into three categories: systems that are much more difficult to penetrate; systems that can work through penetrations; and systems that can recover quickly.
This project was launched a year ago, getting started in full around the beginning of October 2013. We have been ramping up the activities since then and are now fully engaged. Although spending is now on track with the original plan, the ramp-up period has led to some under-spending for the year as a whole.
Srini Devadas, CSAIL
Adam Chlipala, CSAIL
Frans Kaashoek, CSAIL
Shafi Goldwasser, CSAIL
Howard Shrobe, CSAIL
Martin Rinard, CSAIL
Armando Solar Lezama, CSAIL
Vinod Vaikuntanathan, CSAIL
Nickolai Zeldovich, CSAIL
Dimitrios Serpanos, QCRI
X. Yu, C. Fletcher, L. Ren, M. Van Dijk, and S. Devadas, "Generalized External Interaction with Tamper-Resistant Hardware with Bounded Information Leakage", Proceedings of the Cloud Computing Security Workshop (CCSW), November 2013.
"Multi-Input Functional Encryption," Shafi Goldwasser, S. Dov Gordon, Vipul Goyal, Abhishek Jain, Jonathan Katz, Feng-Hao Liu, Amit Sahai, Elaine Shi, and Hong-Sheng Zhou, Proceedings of the 33rd Annual International Conference on the Theory and Applications of Cryptographic Techniques (EUROCRYPT 2014), Copenhagen, Denmark, May 11-15, 2014.
"Suppressing the Oblivious RAM Timing Channel While Making Information Leakage and Program Efficiency Trade-offs," C. Fletcher, L. Ren, X. Yu, O. Khan, M. Van Dijk, and S. Devadas, Proceedings of the 20th Int'l Symposium on High Performance Computer Architecture, February 2014.
"Tiny Path ORAM: A Low-Latency, Low-Area Hardware ORAM Controller with Integrity Verification," Christopher W. Fletcher, Ling Ren, Albert Kwon, Marten Van Dijk, Emil Stefanov, and Srinivas Devadas.
“On the Behavioral Formalization of the Cognitive Middleware AWDRAT” by : Muhammad Taimoor Khan, Dimitrios Serpanos and Howard Shrobe, NWPT 2014 Workshop.
"Freecursive ORAM: [Nearly] Free Recursion and Integrity Verification for Position-based Oblivious RAM," Christopher W. Fletcher, Ling Ren, Albert Kwon, Marten Van Dijk, Srinivas Devadas, ASPLOS 2015 (March 2015).
"Trapdoor Computational Fuzzy Extractors," Charles Herder, Ling Ren, Marten van Dijk, Meng-Day (Mandel) Yu, Srinivas Devadas, submitted to IEEE Security and Privacy, May 2015.
Benjamin Delaware, Clément Pit-Claudel, Jason Gross, Adam Chlipala. "Fiat: Deductive Synthesis of Abstract Data Types in a Proof Assistant." Proceedings of the 42nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'15), January 2015.
Clément Pit-Claudel, Peng Wang, Jason Gross, Benjamin Delaware, Adam Chlipala. "Correct-by-Construction Program Derivation from Specifications to Assembly Language."
M.T. Khan, D. Serpanos and H. Shrobe, “On the Formal Semantics of the Cognitive Middleware AWDRAT”, MIT Report
"Leveled Fully Homomorphic Signatures from Standard Lattices," Sergey Gorbunov, Vinod Vaikuntanathan, and Daniel Wichs, STOC 2015.
"Indistinguishability Obfuscation of Iterated Circuits and RAM Programs," Ran Canetti, Justin Holmgren, Abhishek Jain, and Vinod Vaikuntanathan, STOC 2015.
We propose a new study type, the functional genome-wide association study (fGWAS), to understand the basis of complex genetic traits. Most current experimental designs, relying solely on linear models and genetic information to predict phenotypes, fail to recover the full predictability of a trait. By combining extensive, well-controlled cellular data with novel integrative computational models, we seek to recover a substantial portion of the missing heritability of multiple complex traits. With these contributions, we will capture broad-sense heritability that is missed by linear models relying solely on genotypes and on markers acting individually.
Our new study type will make advances along two fronts by measuring and integrating fine-grained cellular measurements into genotype-phenotype models:
(1) Integrative models that will use cellular measurements to prioritize particular genetic variants and interactions, leading to more effective multiple hypothesis controls and better predictions
(2) Cellular measurements, interpreted as biomarkers, will be used directly to improve prediction of phenotypes
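To illustrate front (2), the sketch below uses entirely simulated data (every variable is invented for this example) to compare a linear genotype-only model against one augmented with a cellular biomarker that mediates a non-additive gene-gene interaction; the augmented model explains more phenotypic variance, mirroring how cellular measurements can recover heritability that genotype-only linear models miss.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 20

# All data here are simulated. Genotypes are coded 0/1/2; the "biomarker"
# mediates a non-additive interaction between two loci that a linear
# genotype-only model cannot capture.
G = rng.integers(0, 3, size=(n, p)).astype(float)
biomarker = np.tanh(G[:, 0] * G[:, 1])
phenotype = G @ rng.normal(0, 0.3, p) + 2.0 * biomarker + rng.normal(0, 0.5, n)

def r2(X, y):
    """Variance explained by an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return 1 - (y - X1 @ beta).var() / y.var()

print(f"genotype only:        R^2 = {r2(G, phenotype):.2f}")
print(f"genotype + biomarker: R^2 = {r2(np.column_stack([G, biomarker]), phenotype):.2f}")
```

In the actual project the integrative models are far richer than this single added regressor, but the contrast shows the mechanism: the cellular measurement acts as a direct predictor of the phenotype.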
Key milestones include (1) developing novel computational methods that link genotype to phenotype using functional information in Functional Genome Wide Association Studies, and (2) characterizing natural human genetic variation using new computational methods.
David Gifford, CSAIL
Tommi Jaakkola, CSAIL
Halima Bensmail, QCRI
Reda Rawi, QCRI
Interactions between chromosomal and nonchromosomal elements reveal missing heritability, Edwards MD, Symbor-Nagrabska A, Dollard L, Gifford DK, Fink GR . Proc Natl Acad Sci U S A. 2014 May 27;111(21):7719-22.
Research Objectives and Milestones Summary
Problem: How is memory implemented in the human brain?
Innovative approach: Development of machine learning classification algorithms for human neuroscience data
Expected outcomes: Knowledge of the computations and brain regions associated with visual long term memory
Aude Oliva, CSAIL
Polina Golland, CSAIL
Halima Bensmail, QCRI
Othmane Bouhali, QCRI
The goal of the project is to design a high-throughput, low-power FPGA implementation of the recently proposed sparse FFT algorithm. To guide the implementation effort, we chose an input data size of one million (2^20) points, with a maximum of 500 nonzero frequency coefficients. We completed an initial implementation of the SFFT Core, which includes a 4096-point dense-FFT module, a top-511 element selector module, a voting module, and a value-compute module. We have been improving the design's performance and resource usage by modifying its pipeline, and we have completed extensive debugging of the design using customized test benches. Given the filtered input data slices, our FPGA implementation now produces the value-index pairs of the 500 most significant frequency components.
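A software reference model can clarify what the selector stage computes. The sketch below, at a deliberately reduced size (1024 points and k = 3 rather than the 2^20-point, top-500 hardware design), recovers the value-index pairs of the dominant frequency components using a dense FFT as ground truth; the signal and sizes are illustrative only.

```python
import numpy as np

def topk_frequencies(x, k):
    """Reference model of the selector stage: return the (index, value)
    pairs of the k largest-magnitude frequency components of x."""
    X = np.fft.fft(x)
    idx = np.argpartition(np.abs(X), -k)[-k:]   # k largest, unordered
    return sorted((int(i), X[i]) for i in idx)   # order by bin index

# Test signal with 3 nonzero frequencies out of 1024 bins.
n = 1024
t = np.arange(n)
x = (2.0 * np.exp(2j * np.pi * 5 * t / n)
     + 1.5 * np.exp(2j * np.pi * 100 * t / n)
     + 1.0 * np.exp(2j * np.pi * 700 * t / n))

pairs = topk_frequencies(x, 3)
print([i for i, _ in pairs])  # → [5, 100, 700]
```

The hardware instead reaches this result without ever computing the full dense FFT, which is the source of the sparse FFT's throughput and power advantage.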
Raymond Filippi, QCRI
Abhinav Agarwal, Haitham Hassanieh, Omid Abari, Ezz Hamed, Dina Katabi & Arvind, "High-Throughput Implementation of a Million-Point Sparse Fourier Transform," In Proceedings of the International Conference on Field Programmable Logic and applications (FPL), 2014
MAQSA is a system for social analytics on news. MAQSA provides an interactive topic-centric dashboard that summarizes news articles and the social activity (e.g., comments and tweets) around them. MAQSA helps editors and publishers in newsrooms understand user engagement and the evolution of audience sentiment on various topics of interest. It also helps news consumers explore public reaction to articles relevant to a topic and refine their exploration via related entities, topics, articles, and tweets. Given a topic, e.g., “Gulf Oil Spill” or “The Arab Spring,” MAQSA combines three key dimensions: time, geographic location, and topic, to generate a detailed activity dashboard around relevant articles. The dashboard contains an annotated comment timeline and a social graph of comments. It utilizes commenters’ locations to build maps of comment sentiment and topics by region of the world. Finally, to facilitate exploration, MAQSA provides listings of related entities, articles, and tweets. It algorithmically processes large collections of articles and tweets, and enables the dynamic specification of topics and dates for exploration. The MAQSA project was completed in Spring 2012, resulting in a patent and a conference paper.
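The map-building step described above can be sketched as a simple aggregation of per-comment sentiment by commenter region. The records and scores below are invented for illustration; in MAQSA the region comes from commenters' locations and the score from a sentiment classifier.

```python
from collections import defaultdict

# Hypothetical comment records: (region, sentiment score in [-1, 1]).
comments = [
    ("Middle East", 0.6), ("Middle East", -0.2),
    ("Europe", 0.1), ("Europe", 0.5),
    ("North America", -0.4),
]

def sentiment_by_region(comments):
    """Average sentiment per region, as rendered on a dashboard map."""
    totals = defaultdict(lambda: [0.0, 0])   # region -> [sum, count]
    for region, score in comments:
        totals[region][0] += score
        totals[region][1] += 1
    return {r: s / n for r, (s, n) in totals.items()}

print(sentiment_by_region(comments))
```

The same pattern extends to topics per region by aggregating topic labels instead of scores.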
Sam Madden, CSAIL
Jorge Quiane Ruiz, QCRI
Sihem Amer-Yahia, QCRI
Sihem Amer-Yahia, Samreen Anjum, Amira Ghenai, Aysha Siddique, Sofiane Abbar, Sam Madden, Adam Marcus, Mohammed El-Haddad: "MAQSA: a system for social analytics on news", SIGMOD Conference 2012: 653-656
The major goal of the project is to understand food habits from social media images. This includes: training machine learning models for image auto-tagging and content extraction from noisy hashtags; predicting population-level health statistics in the US and Qatar; monitoring temporal and regional trends in food consumption and their implications; and learning models that can perform in-depth analysis of food images using large-scale cooking recipe data collected from the web.
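As a toy illustration of content extraction from noisy hashtags, the sketch below splits concatenated CamelCase tags into words and matches them against a small food vocabulary. The vocabulary, function name, and example caption are all hypothetical; the project uses learned models rather than a fixed word list.

```python
import re

# Tiny, made-up food vocabulary for illustration only.
FOOD_TERMS = {"pizza", "salad", "burger", "hummus", "karak"}

def tags_from_hashtags(caption):
    """Extract candidate food tags from noisy hashtags by splitting
    CamelCase/concatenated tags and matching against a vocabulary."""
    tags = set()
    for tag in re.findall(r"#(\w+)", caption):
        # Split CamelCase runs and digit groups, then lowercase.
        words = re.findall(r"[A-Z]?[a-z]+|\d+", tag)
        for w in (tag.lower(), *[w.lower() for w in words]):
            if w in FOOD_TERMS:
                tags.add(w)
    return sorted(tags)

print(tags_from_hashtags("Friday lunch! #CheeseBurger #KarakTea #foodie"))
# → ['burger', 'karak']
```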
We are exploiting big data for image and video manipulation. Our work solves fundamental and challenging computer graphics problems with applications to various impactful domains including: computational photography, multimedia and video content post-production.
Our objective is to answer the question: How can users get the full benefits of multi-user software even when their friends and colleagues use different software vendors, platforms, and service providers? More technically, we aim to design and aid in standardization of protocols which allow for decentralization of social software, thus giving users and vendors a free market for innovation. We also aim to develop software infrastructure that supports this vision, such as servers to support data storage and retrieval, libraries and development tools to support application developers, and web applications for use by end users. Our approach is iterative, building up from small working systems, improving scaling, security, and user experience, as we test and demonstrate new solutions.
We aim to assess the current tactics used by Qataris and other GCC nationals to express identity through the use of virtual identity technologies (e.g., social media profiles and avatars), which are not necessarily designed with their values in mind. This investigation will result in (1) articulation of base principles and best practices for developing technologies that empower Qataris to enact traditional values and cultural norms, (2) new computational techniques for understanding user values and practices in virtual identity systems, and (3) a novel application illustrating the efficacy of our discovered design principles. The key milestone for year 3 is below, with a focus on extending the analysis techniques we developed to more virtual identity platforms and making our design guidelines more available to developers in the region.
Current shared computing platforms, from small clusters to large datacenters, suffer from low utilization, wasting billions of dollars in energy and infrastructure every year. Low utilization stems from a disconnect between layers of the hardware and software stack. The goal of this proposal is to investigate and develop integrated intra- and inter-node resource management techniques that provide both near-peak utilization and guaranteed high performance in shared environments.
To this end, this project consists of three main thrusts:
- Elastic multicore systems, which combine recent hardware support for fast resource management with a novel software runtime to make hardware adaptation work for, not against, performance guarantees. Elastic multicores will use different hardware resources (such as cores, caches, and power) to achieve a given performance target as efficiently as possible, and safely share resources among guaranteed-performance and best-effort applications.
- Novel solutions to enable collaborative multi-tenancy, where resource-intensive workloads are co-scheduled and placed using fine-grained, automatically-collected resource usage profiles, considering aspects such as cache and memory bandwidth sharing.
- A shared system prototype that enables QF computing users to aggressively colocate applications on shared many-core nodes. The system will guarantee the latency requirement of performance-critical tasks (such as Al Jazeera video processing) while achieving high system utilization with intelligent placement of batch tasks such as HPC and data analytics.
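The elastic-multicore idea of trading resources against a performance target can be sketched as a simple feedback rule: grow the allocation when the latency target is missed, shrink it when there is ample slack so freed cores can serve best-effort tasks. The function, thresholds, and core counts below are entirely illustrative; the actual runtime manages cores, caches, and power jointly.

```python
def allocate_cores(current_cores, measured_latency, target_latency,
                   min_cores=1, max_cores=16):
    """Toy feedback controller for an elastic runtime (illustrative only)."""
    if measured_latency > target_latency:
        # Target missed: scale up, bounded by the machine's cores.
        return min(max_cores, current_cores + 1)
    if measured_latency < 0.7 * target_latency:
        # Ample slack: release a core for best-effort work.
        return max(min_cores, current_cores - 1)
    # Within the comfort band: hold steady.
    return current_cores

print(allocate_cores(4, measured_latency=12.0, target_latency=10.0))  # → 5
print(allocate_cores(4, measured_latency=5.0, target_latency=10.0))   # → 3
```

The guaranteed-performance/best-effort split in the first thrust corresponds to running such a controller for the latency-critical application and handing any released resources to batch tasks.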