Invited Speakers




     Prof. Xiuzhen Cheng (Shandong University)

Xiuzhen Cheng received her MS and PhD degrees in computer science from the University of Minnesota -- Twin Cities in 2000 and 2002, respectively. She was a faculty member in the Department of Computer Science at The George Washington University, Washington DC, from September 2002 to August 2020. She is currently a professor in the School of Computer Science and Technology at Shandong University, China. Her research focuses on blockchain computing, the intelligent Internet of Things, and wireless and mobile security. She is the founder and steering committee chair of the International Conference on Wireless Algorithms, Systems, and Applications (WASA, launched in 2006), and the founding Editor-in-Chief of Elsevier's High-Confidence Computing journal. She has served or is serving on the editorial boards of several technical journals (e.g., IEEE Transactions on Computers, IEEE Transactions on Parallel and Distributed Systems, IEEE Transactions on Wireless Communications, and IEEE Wireless Communications Magazine) and on the technical program committees of many professional conferences and workshops (e.g., ACM MobiHoc, ACM MobiSys, IEEE INFOCOM, IEEE ICDCS, IEEE ICC, and IEEE/ACM IWQoS). She has also chaired several international conferences (e.g., ACM MobiHoc'14 and IEEE PAC'18). She is a Fellow of the IEEE.

Title - High-Confidence Computing and High-Confidence Internet of Things

Abstract - High-Confidence Computing (HCC) is a new computing paradigm that enables the integration of secure computing, precise computing, and intelligent computing. HCC relies on interdisciplinary methodologies to realize secure and trusted software/hardware, precise and process-traceable algorithms, and self-evolving designs that can adapt to new environments and support new applications. Internet of Things (IoT) systems possessing HCC properties can provide collaborative services that would otherwise be impossible, since security, traceability, accountability, reliability, robustness, extensibility, adaptivity, and self-evolution are all desirable and equally important properties of modern connected systems such as smart cities. This talk intends to answer the following questions: what is high-confidence computing, why do IoT systems need it, and how can a high-confidence IoT be realized? We will introduce an architecture that demonstrates our exploratory studies on integrating state-of-the-art techniques to build a high-confidence IoT, and we will also present our own efforts toward realizing it.



     Dr. Vadim Lyubashevsky (IBM Research Europe in Zurich)

Vadim Lyubashevsky is a principal research scientist in the security group at IBM Research Europe in Zurich. He received his Ph.D. from the University of California San Diego in 2008, and subsequently held positions as a post-doc at Tel Aviv University and as a researcher at Inria in Paris. He was the recipient of a European Research Council (ERC) Starting Grant on constructions of practical lattice-based encryption and digital signatures, and currently holds an ERC Consolidator Grant focused on constructing next-generation lattice-based zero-knowledge protocols. He is actively involved in the NIST PQC standardization process as part of the teams behind three submissions, all of which are among the finalists.

Title - Lattice Cryptography and PQC Standardization

Abstract - Barring any cryptanalytic breakthroughs, cryptography based on the hardness of lattice problems is on track to become the main replacement for traditional cryptography in the post-quantum world. Lattice-based schemes have fairly short keys and outputs, and enjoy excellent efficiency, often being substantially faster than their classical counterparts. In this talk I will give a brief overview of, and comparison between, the different lattice-based KEMs and signature schemes in the final round of the NIST standardization process that began in 2017. I will explain what makes lattice-based schemes so efficient and offer some personal opinions on how, given what we have learned during the last four years of the standardization process, the current proposals can be improved.