Program of 2019 Huawei Forum on
Future Security
25 Nov 2019, Kuala Lumpur, Malaysia
Organized by Huawei International
Contact Information
Dr. Cheng Kang Chu: [email protected], +65 9856 6271 (M)
Dr. Hsiao Ying Lin: [email protected], +65 9155 7630 (M)
Venue
Grand Salon 1, Level 1, Grand Hyatt Kuala Lumpur
12, Jalan Pinang, 50450 Kuala Lumpur, Wilayah Persekutuan Kuala Lumpur
Location map for the Grand Hyatt Kuala Lumpur hotel
Monday, 25 Nov 2019
08:30-09:00 Registration
09:00-09:10 Opening Introduction
Session 1: AI Security and Privacy
Chair: Dr. Tieyan Li, Huawei
09:10-09:50 Machine Learning Security: Adversarial Attacks and Transferability
Prof. Battista Biggio, University of Cagliari, Italy
09:50-10:30 Towards Improving Accuracy in Differentially Private Deep Learning
Prof. Xiaokui Xiao, NUS, Singapore
10:30-11:00 Coffee Break
11:00-11:40 Federating Under Privacy Constraints: When Federated Learning and
Cryptography Meet
Prof. David Naccache, ENS, France
11:40-12:20 The Problem of Explaining Deep Neural Networks
Prof. Jun Sun, SMU, Singapore
12:20-14:00 Buffet Lunch @ Sky Lunch, Grand Hyatt
Session 2: Cryptography and System Security
Chair: Dr. Guilin Wang, Huawei
14:00-14:40 Secure and Verifiable Computation
Prof. Huaxiong Wang, NTU, Singapore
14:40-15:20 Privacy preserving computations: state of the art and new challenges in
Homomorphic Encryption and Multiparty Computations
Prof. Nicolas Gama, Inpher, Inc., Switzerland
15:20-16:00 Coffee Break
16:00-16:40 Finding Implementation Flaws in Password and OTP Authentication Code in
Android Apps
Prof. Robert Deng, SMU, Singapore
16:40-17:20 Scaling up Binary Analysis via Knowledge-oriented Techniques
Prof. Zhenkai Liang, NUS, Singapore
Keynote Talk 1
Machine Learning Security: Adversarial Attacks and Transferability
Prof. Battista Biggio, University of Cagliari, Italy
Abstract: It has been shown that data-driven AI and machine learning suffer from hallucinations known as adversarial examples, i.e., imperceptible, adversarial perturbations to images, text and audio that fool these systems into perceiving things that are not there. This phenomenon is even more evident in the context of cybersecurity domains like malware and spam detection, in which data is purposely manipulated by cybercriminals to undermine the outcome of automatic analyses. These attacks are crafted in a white-box setting, but it has been shown that they can also transfer to different models for which only black-box query access is provided to the attacker.
In this talk, I review different adversarial attacks that may undermine machine learning security, including test-time evasion and training-time poisoning attacks, and shed light on the transferability properties of such attacks. I conclude by discussing some promising defense mechanisms against both attacks in the context of real-world applications, including computer vision, biometric identity recognition and computer security.
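The test-time evasion attacks surveyed in this talk can be sketched with a one-step gradient-sign perturbation (FGSM-style). The toy linear classifier and function names below are illustrative assumptions, not the speaker's implementation:

```python
import numpy as np

def fgsm_perturb(x, grad_loss_wrt_x, eps=0.1):
    """One-step gradient-sign evasion (illustrative): shift every
    feature by eps in the direction that increases the loss."""
    return x + eps * np.sign(grad_loss_wrt_x)

# Toy linear classifier: score = w . x. For the true class, the loss
# decreases as the score grows, so the loss gradient w.r.t. x is -w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
x_adv = fgsm_perturb(x, -w, eps=0.1)

print(w @ x)      # original score: -0.15
print(w @ x_adv)  # perturbed score: -0.5 (pushed toward misclassification)
```

Each feature moves by at most eps, keeping the perturbation small (here, imperceptible in the image analogy) while the classifier's score degrades.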
Bio: Battista Biggio (MSc ’06, PhD ‘10) is an Assistant Professor at the Department of Electrical and Electronic Engineering of the University of Cagliari, Italy, and a co-founder of Pluribus One, a startup company developing secure AI algorithms for cybersecurity tasks. In 2011, he visited the University of Tuebingen, Germany. His pioneering research on adversarial machine learning involved the development of secure learning algorithms for spam and malware detection and computer-vision problems, playing a leading role in the establishment and advancement of this research field. On these topics, he has published more than 70 papers, collecting more than 3500 citations (Google Scholar, October 2019). Dr. Biggio regularly serves as a reviewer and program committee member for several international conferences and journals on the aforementioned research topics (including ICML, NeurIPS, IEEE Symp. S&P and ACM CCS), co-organizes three well-established workshops (AISec, DLS, S+SSPR), and is Associate Editor of three high-impact journals (Pattern Recognition, IEEE TNNLS, and IEEE Comp. Intell. Magazine). He is chair of the IAPR TC1 on Statistical Pattern Recognition, a senior member of the IEEE, and a member of the IAPR and ACM.
Keynote Talk 2
Towards Improving Accuracy in Differentially Private Deep
Learning
Prof. Xiaokui Xiao, National University of Singapore, Singapore
Abstract: In recent years, differential privacy has become a well-accepted standard for privacy
protection, and deep neural networks have been immensely successful in machine learning.
Combining the two is a promising idea, since it could enable the release of accurate models built
on sensitive data such as medical records. However, deep learning models are rather resistant to
differential privacy, due to the former's vast number of parameters and complex training
process based on stochastic gradient descent (SGD). Consequently, straightforward solutions
would require overwhelming noise to satisfy differential privacy, leaving hardly any utility in the
resulting model. The current state-of-the-art solutions achieve decent accuracy based on a
sophisticated privacy loss analysis, as well as other optimizations such as dynamic privacy
budget allocation. Even so, their accuracy is still rather low compared to the non-private setting,
due to the large amount of noise injected into each gradient value in SGD. In this talk, we will
present a novel solution for differentially private deep neural network training that achieves
significant and consistent accuracy gains over existing methods. The main idea is to exploit the
observation that nearby iterations in SGD often yield similar gradients; in this situation, the
differentials between these gradients can be published more precisely than the gradients
themselves, under the same privacy requirement. Extensive experiments on two classic
benchmark datasets demonstrate the high effectiveness of our solution and its superiority over
existing methods.
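The key observation, that nearby SGD iterations yield similar gradients whose differential can be released with far less noise under the same privacy requirement, can be illustrated with a toy Gaussian mechanism. All names, clipping norms and noise parameters below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(v, clip, sigma):
    """Gaussian-mechanism sketch: clip v to L2 norm `clip` to bound
    its sensitivity, then add noise proportional to that bound."""
    v = v * min(1.0, clip / (np.linalg.norm(v) + 1e-12))
    return v + rng.normal(0.0, sigma * clip, size=v.shape)

g_prev = np.array([1.00, -0.50, 0.25])  # gradient released at step t-1
g_curr = np.array([1.05, -0.48, 0.24])  # similar gradient at step t

# Releasing the gradient directly needs a clipping norm covering the whole
# gradient; the differential g_curr - g_prev is tiny, so a much smaller
# clipping norm (hence much less absolute noise) suffices at the same sigma.
noisy_direct = privatize(g_curr, clip=2.0, sigma=1.0)
noisy_diff = g_prev + privatize(g_curr - g_prev, clip=0.1, sigma=1.0)
```

With the same noise multiplier sigma, the error of the differential-based release is governed by the far smaller clipping norm, which is exactly why similar consecutive gradients help.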
Bio: Xiaokui Xiao is an associate professor at the School of Computing (SoC), National University
of Singapore (NUS). Prior to joining NUS, he was an associate professor at the School of
Computer Science and Engineering, Nanyang Technological University. Xiaokui’s research
focuses on data management and analytics, especially on algorithms for large data, data privacy,
and data mining. He has published extensively in the leading data management conferences and
journals, and serves as an associate editor for the International Journal on Very Large Data
Bases (VLDBJ) and the IEEE Transactions on Knowledge and Data Engineering (TKDE).
Keynote Talk 3
Federating Under Privacy Constraints: When Federated Learning and Cryptography Meet
Prof. David Naccache, ENS Paris, France
Abstract: Federated learning consists of aggregating several machine learning tasks into a bigger, global learning endeavor. The aggregation of tasks concerns either the learning phase (federated learning) or the testing phase (federated testing). Typically, federated learning allows several network users to jointly train or interrogate a global model while each user keeps its local dataset private. Federating has several advantages, amongst which the most important are efficiency, i.e., the ability to distribute (parallelize) learning over several machines, and security (because confidential information is not stored or processed at a single master node). This talk will overview the security challenges posed by federated learning and investigate how cryptography (notably homomorphic encryption, multiparty computation and oblivious-transfer-based techniques) may add privacy to federated learning.
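The aggregation step at the heart of federated learning can be sketched with weighted federated averaging (FedAvg-style); the function name and toy parameters are illustrative assumptions, not a specific scheme from the talk:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging sketch: combine client model parameters
    weighted by local dataset size; raw data never leaves a client."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients train locally and only share their model parameters.
clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([2.0, 2.0])]
sizes = [100, 300, 100]
global_model = fed_avg(clients, sizes)
print(global_model)  # -> [2.4 0.8]
```

The server only ever sees parameter vectors, which is why the cryptographic tools mentioned in the abstract (homomorphic encryption, MPC) are then applied to protect even those parameters.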
Bio: Prof. David Naccache heads the ENS Information Security Group. His research areas are
code security, forensics, and the automated and manual detection of vulnerabilities. Before
joining ENS Paris (PSL) he was a professor for 10 years at UP2 (Sorbonne Universités). He
previously worked for 15 years for Gemplus (now Gemalto), Philips (now Oberthur) and
Thomson (now Technicolor). He studied at UP13 (BSc), UP6 (MSc), IMAC (Eng), TPT (PhD), UP7
(HDR), IHEDN and ICP (STB underway). He serves as a forensic expert for several courts, is a
member of OSCP, and holds the Law and IT Forensics chair at EOGN.
Keynote Talk 4
The Problem of Explaining Deep Neural Networks
Prof. Jun Sun, Singapore Management University, Singapore
Abstract: We still do not understand how or why deep neural networks (DNNs) work. Interpretability is, however, crucially relevant when it comes to safety and security. Many attempts have been made in recent years to explain trained DNN models. In this talk, I will discuss some of them, including one approach from our group.
Bio: SUN, Jun is currently an associate professor at Singapore Management University (SMU). He received his Bachelor's and PhD degrees in computing science from the National University of Singapore (NUS) in 2002 and 2006. In 2007, he received the prestigious LEE KUAN YEW postdoctoral fellowship. He has been a faculty member of SUTD and then SMU since 2010, and was a visiting scholar at MIT from 2011 to 2012. Jun's research interests include software engineering, cyber-security and formal methods.
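One of the simplest families of DNN explanation methods is gradient-based attribution. The gradient-times-input sketch below, on a toy one-layer model, is an illustration of that family, not the speaker's specific approach:

```python
import numpy as np

# Toy one-layer "network" with logits W @ x. Gradient-times-input scores
# how much each input feature contributes to the predicted class logit.
W = np.array([[2.0, -1.0, 0.0],
              [0.5,  0.5, 1.0]])
x = np.array([1.0, 0.2, -0.5])

logits = W @ x
pred = int(np.argmax(logits))
# For a linear layer, d(logit_pred)/dx is exactly the row W[pred];
# for a real DNN this gradient comes from backpropagation instead.
saliency = W[pred] * x  # attribution score per input feature
print(pred, saliency)
```

The first feature dominates the explanation here, matching the intuition that it drives the winning logit; real explanation methods refine this idea to cope with non-linearity and gradient saturation.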
Keynote Talk 5
Secure and Verifiable Computation
Prof. Huaxiong Wang, Nanyang Technological University, Singapore
Abstract: Outsourcing computation has gained significant popularity in recent years due to the
prevalence of cloud computing. How to keep the confidentiality of the client's data and how to
ensure the correctness of the server's computation are two fundamental problems to achieve.
Verifiable computation, introduced by Gennaro, Gentry and Parno in 2010, allows a client to delegate the
computation of a function f on outsourced data x to third parties, such that the data owner and/or
other third parties can verify that the outcome y = f(x) has been computed correctly by the third
party. Constructing efficient verifiable computation schemes has attracted a lot of attention
during the past decade. In this talk, we will present a brief overview of the state-of-the-art and
discuss a new (multi-server) model for verifiable computation, which allows unconditional
security, practical efficiency, and public delegation.
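A classic, self-contained instance of verifying an outsourced result more cheaply than recomputing it is Freivalds' probabilistic check for matrix products. It is offered here only as an illustration of the verification idea, not as the multi-server scheme presented in the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

def freivalds_check(A, B, C, rounds=20):
    """Freivalds' check: test whether C == A @ B using O(n^2) work per
    round, comparing A @ (B @ r) with C @ r for random 0/1 vectors r
    instead of recomputing the O(n^3) product."""
    n = C.shape[1]
    for _ in range(rounds):
        r = rng.integers(0, 2, size=n)
        if not np.array_equal(A @ (B @ r), C @ r):
            return False  # the claimed result is certainly wrong
    return True  # a wrong C survives all rounds with prob <= 2**-rounds

A = rng.integers(0, 10, (4, 4))
B = rng.integers(0, 10, (4, 4))
good = A @ B
bad = good.copy()
bad[0, 0] += 1  # a server returning a subtly corrupted answer

ok_good = freivalds_check(A, B, good)  # True
ok_bad = freivalds_check(A, B, bad)
print(ok_good, ok_bad)
```

The client does quadratic work per round while the server did the cubic computation, which is the asymmetry verifiable computation generalizes to arbitrary functions f.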
Bio: Huaxiong Wang received a PhD in Mathematics from the University of Haifa, Israel in 1996 and a PhD in Computer Science from the University of Wollongong, Australia in 2001. He has been with Nanyang Technological University (NTU) in Singapore since 2006, where he also served as Head of the Division of Mathematical Sciences from 2013 to 2015. He is currently the Deputy Director of the Strategic Centre for Research in Privacy-Preserving Technologies & Systems (SCRIPTS) at NTU. He has more than 20 years of research experience in cryptography and information security. He is the author/co-author of 1 book, 9 edited books and over 200 papers in international journals and conferences, covering various areas in cryptography and information security. He has supervised over 25 PhD students, and has served on the editorial boards of several international journals and as a member/chair of the program committees of more than 100 international conferences. He received the inaugural Award of Best Research Contribution from the Computer Science Association of Australasia in 2004, and was awarded the Minjiang Scholar in 2013 by Fujian Province, China. He was an invited speaker at ASIACRYPT 2017, and will serve as Program Co-Chair of Asiacrypt 2020 and 2021.
Keynote Talk 6
Privacy Preserving Computations: State of the art and new challenges in Homomorphic Encryption and Multiparty Computations
Prof. Nicolas Gama, Inpher Inc, Switzerland
Abstract: In this talk, we recall the state of the art in privacy-preserving computations for the cloud: namely, fully homomorphic encryption (FHE) and multiparty computations (MPC). We present the current trends as well as the major evolutions, in terms of both computation models and security assumptions. Finally, we present some current challenges that require computing over secret data, and demonstrate a few technologies that can tackle these kinds of real-world applications.
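The MPC side of this landscape can be illustrated with additive secret sharing, the simplest building block of many MPC protocols. The scheme and parameters below are a toy sketch, not Inpher's technology:

```python
import secrets

P = 2**61 - 1  # public modulus (toy parameter, not a vetted choice)

def share(x, n=3):
    """Additive secret sharing: any n-1 shares look uniformly random,
    yet all n shares sum to x modulo P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

a, b = 123, 456
sa, sb = share(a), share(b)
# Each party adds its own two shares locally; nobody ever sees a or b,
# yet reconstruction yields the true sum (additive homomorphism).
sum_shares = [(x + y) % P for x, y in zip(sa, sb)]
print(reconstruct(sum_shares))  # -> 579
```

Addition is "free" in this model; multiplying shared secrets is where real MPC protocols (and the FHE alternatives discussed in the talk) earn their complexity.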
Bio: Nicolas Gama is currently Chief Computer Scientist at Inpher Inc, and also an associate
professor at Université de Versailles. He is responsible for the research and development
of new security and cryptographic software for performing private computations in a
public cloud environment. He studied Foundations of Computer Science at the Ecole
Normale Supérieure Paris, specializing in algorithmics and cryptology. He completed his
PhD on the geometry of numbers and its applications to cryptology, graduating in 2008.
He joined the Crypto team of Versailles in 2010 as an Associate Professor, and joined
Inpher as Chief Computer Scientist in 2015. His areas of research include cryptology,
cloud security, fully homomorphic encryption, multiparty computation, and distributed
computing. He is co-author of at least 30 publications in international journals and
conferences in this domain. He co-authored the best paper at Asiacrypt 2016 on fully
homomorphic encryption, which initiated the development of the open-source TFHE
library. At Inpher, he also works on the XOR Secret Computing engine, whose goal is to
provide efficient privacy-preserving solutions across multiple jurisdictions.
Keynote Talk 7
Finding Implementation Flaws in Password and OTP Authentication Code in Android Apps
Prof. Robert Deng, Singapore Management University, Singapore
Abstract: Passwords and One Time Passwords (OTPs) are widely used to validate users’ identities in computer systems because they are convenient to use and “simple” to implement. However, we find that even simple password and OTP authentication protocols are often implemented incorrectly. We develop GLACIATE and AUTH-EYE, respectively, to study the extent and types of implementation flaws in password and OTP authentication code in Android apps. GLACIATE automatically and accurately learns the common password authentication implementation flaws from a relatively small training dataset, and then identifies whether the flaws exist in other apps. We collected 16,387 apps from Google Play for evaluation. GLACIATE successfully identified 4,105 of them with incorrect password authentication implementations. AUTH-EYE is an automatic analysis tool which checks whether an OTP authentication implementation violates any of a pre-defined set of security rules. AUTH-EYE analysed 3,303 popular Android apps and found that 544 of them adopt SMS OTP authentication, and that 536 of those 544 apps violate at least one of the security rules.
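The kinds of security rules such tools check, and that flawed apps violate, can be illustrated with a correctly implemented OTP verification. The rule names and thresholds below are illustrative assumptions, not AUTH-EYE's actual rule set:

```python
import hmac
import time

MAX_ATTEMPTS = 3      # illustrative thresholds only; real rule sets
OTP_TTL_SECONDS = 60  # differ per deployment

def verify_otp(submitted, expected, issued_at, attempts, now=None):
    """A correct check: enforce a retry limit, expire old codes, and
    compare in constant time so timing leaks nothing about the code."""
    now = time.time() if now is None else now
    if attempts >= MAX_ATTEMPTS:
        return False  # too many tries: force issuing a fresh OTP
    if now - issued_at > OTP_TTL_SECONDS:
        return False  # expired codes must never validate
    return hmac.compare_digest(submitted, expected)

t0 = 1_000_000.0
print(verify_otp("482913", "482913", t0, 0, now=t0 + 10))   # True
print(verify_otp("482913", "482913", t0, 0, now=t0 + 120))  # False: expired
print(verify_otp("482913", "482913", t0, 3, now=t0 + 10))   # False: throttled
```

Omitting any one of these checks (no expiry, unlimited retries, or an early-exit string comparison) is precisely the kind of implementation flaw the abstract describes.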
Bio: Robert Deng is AXA Chair Professor of Cybersecurity and Director of the Secure
Mobile Centre, School of Information Systems, Singapore Management University
(SMU). His research interests are in the areas of data security and privacy, network
security, and system security. He received the Outstanding University Researcher
Award from National University of Singapore, Lee Kuan Yew Fellowship for Research
Excellence from SMU, and Asia-Pacific Information Security Leadership Achievements
Community Service Star from International Information Systems Security Certification
Consortium. He serves/served on many editorial boards and conference committees.
These include the editorial boards of IEEE Security & Privacy Magazine, IEEE
Transactions on Dependable and Secure Computing, IEEE Transactions on Information
Forensics and Security, Journal of Computer Science and Technology, and Steering
Committee Chair of the ACM Asia Conference on Computer and Communications
Security. He is a Fellow of IEEE and Fellow of Academy of Engineering Singapore.
Keynote Talk 8
Scaling up Binary Analysis via Knowledge-oriented Techniques
Prof. Zhenkai Liang, National University of Singapore, Singapore
Abstract: Binary analysis is a fundamental technique in software and system security. It has a wide range of applications, such as vulnerability discovery, attack response, malware analysis, and software testing and debugging. Due to the lack of high-level semantics and the complexity of program behaviors, it is challenging for binary analysis solutions to scale up to large real-world binaries in practice. Existing solutions are often task-driven and bounded by a practical time limit, which hinders comprehensive understanding of binaries. Furthermore, it is difficult to integrate the knowledge generated from different solutions. In this talk, we discuss our research in scaling up binary analysis in a knowledge-oriented manner. We believe knowledge abstraction is the key to scaling up binary analysis, where binary analysis solutions generate understandings that can be shared and reused in other solutions. Our investigation includes techniques for knowledge extraction, tools for knowledge integration, and platforms for knowledge accumulation and sharing. The accumulated knowledge not only allows broader and deeper analysis of binaries, but also enables emerging data-driven and learning techniques to be effectively adopted in binary analysis solutions.
Bio: Zhenkai Liang is an Associate Professor of the School of Computing, National
University of Singapore. His main research interests are in system and software security,
web security, mobile security, and program analysis. He is also the Co-Lead PI of
National Cybersecurity R&D Lab in Singapore. He has served as the technical program
committee members of many system security conferences, including the ACM
Conference on Computer and Communications Security (CCS), USENIX Security
Symposium and the Network and Distributed System Security Symposium (NDSS), as
well as a member of the NDSS Steering Group. As a co-author, he received seven
best/distinguished paper awards, including at the USENIX Security Symposium, FSE and
ACSAC. He also won the Annual Teaching Excellence Award of the National University of
Singapore in 2014 and 2015. He received his Ph.D. degree in Computer Science from
Stony Brook University in 2006, and B.S. degrees in Computer Science and Economics
from Peking University in 1999.