5th International Workshop on Artificial Intelligence and Industrial Internet-of-Things Security (AIoTS)
Workshop Program, June 21, 2023: 9:00 AM - 12:30 PM JST
- Workshop Opening: 9:00 - 9:10 AM [All times are JST]
Keynote 1: 9:10 - 10:00 AM
Session Chair: Dr. Ashiqur Rahman
Speaker Name: Prof. Bo Li
Affiliation: UIUC
Title: Certifiably Robust Learning via Knowledge-Enabled Logical Reasoning
- Break: 10:00 - 10:30 AM
Paper Session: 10:30 - 11:30 AM
Session Chair: TBD
- 10:30 - 10:50 (1) Blockchain-enabled Data Sharing in Connected Autonomous Vehicles for Heterogeneous Networks.
  Authors: Ali Hussain Khan, Naveed Ul Hassan, Zartash Afzal Uzmi, Chuadhry Mujeeb Ahmed and Chau Yuen
- 10:50 - 11:10 (2) EARIC: Exploiting ADC Registers in IoT and Control Systems.
  Authors: Eyasu Getahun Chekole, Rajaram Thulasiraman and Jianying Zhou
- 11:10 - 11:30 (3) A Security Policy Engine for Building Energy Management Systems.
  Authors: Jiahui Lim, Wenshei Ong, Utku Tefek and Ertem Esiner
- Break: 11:30 - 11:40 AM
Keynote 2: 11:40 AM - 12:30 PM
Session Chair: Dr. Daisuke Mashima
Speaker Name: Prof. Jianying Zhou
Affiliation: SUTD
Title: Maritime Cybersecurity: Challenges, Guidelines and Testbeds
- Workshop Closing: 12:30 PM
Important Dates
- Paper Submission Deadline: March 27, 2023 [Extended from March 20, 2023]
- Notification of Acceptance: April 19, 2023
- Submission of camera-ready papers for pre-proceedings: May 1, 2023
- Workshop Date: June 21, 2023
Keynote Speakers
- Speaker 1: Bo Li (UIUC)
- Title: Certifiably Robust Learning via Knowledge-Enabled Logical Reasoning
Abstract: The ubiquity of intelligent systems underscores the paramount importance of ensuring their trustworthiness. Traditional machine learning approaches often assume that training and test data follow similar distributions, neglecting the possibility of adversaries manipulating either distribution or natural distribution shifts, which can lead to severe trustworthiness issues in machine learning. Our previous research has demonstrated that motivated adversaries can circumvent anomaly detection or other machine learning models at test-time through evasion attacks, or inject malicious instances into training data to induce errors through poisoning attacks. In this talk, I will provide a succinct overview of our research on trustworthy machine learning, including robustness, privacy, generalization, and their underlying interconnections, with a focus on robustness. Specifically, I will first discuss the current state of the art in certifiably robust defenses based on purely data-driven models and demonstrate that they have reached a bottleneck. I will then present our recent research on certifiably robust learning via knowledge-enabled logical reasoning, showing that it is possible to: 1) certify the robustness of such an end-to-end framework and significantly improve the certified robustness on large-scale datasets, 2) prove that such a framework is more robust than a single data-driven model under mild conditions, and 3) scale it for a variety of downstream tasks such as image classification, information extraction, PDF malware classification, and data generation.
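As a toy illustration of the test-time evasion attacks mentioned in the abstract (not code from the talk), the sketch below trains a small logistic-regression classifier on synthetic 2D data and then applies a fast-gradient-sign (FGSM-style) perturbation to flip its prediction. The data, model, and perturbation budget eps are all invented for illustration; only NumPy is assumed.

```python
# Toy FGSM-style evasion attack against a hand-rolled logistic-regression
# classifier. Everything here (data, model, epsilon) is synthetic and chosen
# only to illustrate the idea of a test-time evasion attack.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: two Gaussian blobs in 2D.
X = np.vstack([rng.normal(-1.0, 0.5, size=(100, 2)),   # class 0
               rng.normal(+1.0, 0.5, size=(100, 2))])  # class 1
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train logistic regression with plain gradient descent on the cross-entropy loss.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

# Take a correctly classified input from the class-1 region and perturb it
# with the sign of the loss gradient w.r.t. the input (FGSM step).
x = np.array([0.4, 0.3])                 # clean input, predicted class 1
eps = 0.5                                # attacker's L-infinity budget
grad_x = (sigmoid(x @ w + b) - 1.0) * w  # d(loss)/dx for true label y = 1
x_adv = x + eps * np.sign(grad_x)

print("clean prediction      :", sigmoid(x @ w + b))      # well above 0.5
print("adversarial prediction:", sigmoid(x_adv @ w + b))  # pushed below 0.5
```

Certified defenses of the kind discussed in the talk aim to guarantee that no perturbation within such an eps-ball can change the prediction, rather than merely resisting one particular attack.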
Bio: Dr. Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois at Urbana–Champaign. She is the recipient of the IJCAI Computers and Thought Award, Alfred P. Sloan Research Fellowship, AI’s 10 to Watch, NSF CAREER Award, MIT Technology Review TR-35 Award, Dean's Award for Excellence in Research, C.W. Gear Outstanding Junior Faculty Award, Intel Rising Star award, Symantec Research Labs Fellowship, Rising Star Award, Research Awards from Tech companies such as Amazon, Meta, Google, Intel, IBM, and eBay, and best paper awards at several top machine learning and security conferences. Her research focuses on both theoretical and practical aspects of trustworthy machine learning, which is at the intersection of machine learning, security, privacy, and game theory. She has designed several scalable frameworks for certifiably robust learning and privacy-preserving data publishing. Her work has been featured by several major publications and media outlets, including Nature, Wired, Fortune, and New York Times.
- Speaker 2: Jianying Zhou (SUTD)
- Title: Maritime Cybersecurity: Challenges, Guidelines and Testbeds
Abstract: Critical infrastructure becomes a strategic target in the midst of a cyber war, and maritime is a key sector of critical infrastructure. In this talk, I will first introduce the efforts taken by iTrust in protecting critical infrastructure. Then I will focus on our recent work in the maritime sector, especially the new guidelines being developed for cyber risk management of shipboard OT systems, and the new maritime testbed being built to close the gap between the urgent need to develop and deploy cybersecurity technologies for the maritime sector and the lack of a safe and realistic environment for testing and validating such technologies before they are adopted.
Bio: Jianying Zhou is a professor and center director (designate) for iTrust at Singapore University of Technology and Design (SUTD). He received his PhD in Information Security from Royal Holloway, University of London. His research interests are in applied cryptography and network security, cyber-physical system security, and mobile and wireless security. He is a co-founder and steering committee co-chair of ACNS. He is also steering committee chair of ACM AsiaCCS and a steering committee member of Asiacrypt. He is an ACM Distinguished Member. He received the ESORICS Outstanding Contribution Award in 2020 in recognition of his contributions to the community.
Workshop Organizers
- Mohammad Ashiqur Rahman, Florida International University, USA
- Daisuke Mashima, ADSC & National University of Singapore, Singapore
- Sridhar Adepu, University of Bristol, UK
- Kazuhiro Minami, Institute of Statistical Mathematics, Japan
Program Committee
- Nur Imtiazul Haque, Florida International University
- Chuadhry Mujeeb Ahmed, University of Strathclyde, UK
- Magnus Almgren, Chalmers University
- John Castellanos, CISPA Helmholtz Center for Information Security
- Luca Davoli, University of Parma
- Carl Dickinson, Newcastle University
- Amrita Ghosal, University of Limerick
- Joseph Gardiner, University of Bristol
- Luis Garcia, University of Southern California
- Vasileios Gkioulos, Norwegian University of Science and Technology
- Sheikh Rabiul Islam, University of Hartford
- Jorjeta Jetcheva, San Jose State University
- Charalambos Konstantinou, KAUST
- Marina Krotofil, Maersk
- Xin Lou, Singapore Institute of Technology
- Subhash Lakshminarayana, University of Warwick
- Rajib Ranjan Maiti, Birla Institute of Technology
- Weizhi Meng, Technical University of Denmark
- Venkata Reddy Palleti, Indian Institute of Petroleum & Energy
- Neetesh Saxena, Cardiff University, UK
- Biplab Sikdar, National University of Singapore
- Giedre Sabaliauskaite, Swansea University
- Utku Tefek, Advanced Digital Sciences Center
- Alma Oracevic, University of Bristol
- Zheng Yang, Southwest University
- Katsunari Yoshioka, Yokohama National University
- Pengfei Zhou, University of Pittsburgh
Workshop Description
In recent years, Artificial Intelligence (AI) has received a great deal of attention, especially due to the success of deep learning in addressing problems previously considered hard. Big players, such as Google, Amazon, and Baidu, are exploring the application of AI in different markets, including healthcare, FinTech, and autonomous vehicles. Together with AI, technologies like the Internet-of-Things (IoT) have boosted the emerging Industry 4.0, where, through the adoption of Industrial IoT (IIoT) into the production chain, companies pursue smarter manufacturing that can adapt to their customers’ needs.
The accelerating adoption of these new technologies brings challenges primarily associated with the cybersecurity of the applications, where confidentiality, integrity, and availability of data are crucial. A security incident in IIoT can also affect safety, since applications interact physically with people and other assets. The intersection of AI and cybersecurity can be seen as a two-fold relationship: on the one hand, AI techniques can be adopted to improve state-of-the-art security solutions; on the other hand, cybersecurity can contribute to improving the security of AI algorithms through the exploration of adversarial machine learning.
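As a small, hedged illustration of the first direction (AI techniques supporting security solutions), the sketch below trains an off-the-shelf anomaly detector on synthetic IIoT sensor readings and flags an out-of-distribution measurement. It assumes NumPy and scikit-learn are available; the sensor values, setpoints, and contamination rate are invented purely for demonstration and are not tied to any particular system.

```python
# Illustrative sketch: a learned anomaly detector flagging suspicious IIoT
# sensor readings. Assumes scikit-learn is installed; all data is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Normal operation: temperature and pressure readings around fixed setpoints.
normal = np.column_stack([
    rng.normal(70.0, 1.5, 1000),   # temperature (degrees C)
    rng.normal(3.0, 0.1, 1000),    # pressure (bar)
])

# Train on (assumed attack-free) historical data.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# New observations: two normal readings and one spoofed/faulty reading.
new_readings = np.array([
    [70.4, 3.02],   # normal
    [69.1, 2.95],   # normal
    [85.0, 1.20],   # anomalous (e.g. sensor spoofing or actuator fault)
])

# predict() returns +1 for inliers and -1 for outliers.
for reading, label in zip(new_readings, detector.predict(new_readings)):
    status = "OK" if label == 1 else "ALERT"
    print(reading, "->", status)
```

In practice such a detector would be only one component of a defense and, as noted above, is itself a potential target of evasion and poisoning attacks.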
This workshop aims to open a space where new research ideas from different areas converge into the intersection of AI, IIoT, Cyber-Physical Systems (CPS), and cybersecurity. We encourage researchers and experts in the fields of AI, embedded systems, CPS, and cybersecurity to take the opportunity to use this workshop to share their work and open the discussion of new ideas on this always-evolving topic.
Topics of Interest
AIoTS aims to cover a broad range of applications at the intersection of security, privacy, artificial intelligence, and industrial IoT. Thus, suggested topics include, but are not limited to, the following:
- Formal security and resilience analysis on AI and IIoT/CPS
- Risk management and governance for AI and IIoT-based Applications
- AI-Assisted Critical Infrastructure Security
- (Federated) Adversarial Machine Learning
- AI for Detection, Prevention, Response and Recovery against Potential Threats
- AI for Wide-Area Situational Awareness and Traceability
- Applied Cryptography for AI and IIoT
- Security and Privacy of Cyber-Physical Systems and/or IIoT
- Applications of Formal Methods to IIoT Security
- Blockchain for Trustworthy IIoT/CPS-based Applications
- Embedded Systems Security
- Cyber Threat Intelligence for AI and IIoT/CPS
- Privacy-Preserving Machine Learning
Paper Submission
Instructions for authors:
Papers can be submitted through the workshop's EasyChair submission page.
Submissions must be original and must not duplicate work that any of the authors has published elsewhere or has submitted in parallel to any other venue with formally published proceedings.
Submissions must be anonymous, with no author names, affiliations, acknowledgements, or obvious references. Each submission must begin with a title, a short abstract, and a list of keywords. The introduction should summarise the contributions of the paper at a level appropriate for a non-specialist reader. All submissions must follow the original LNCS format (see http://www.springeronline.com/lncs) with a page limit of 20 pages (including references). Authors are strongly encouraged to prepare their submissions in LaTeX.
Authors of accepted papers must guarantee that their paper will be presented at the conference and must make a full version of their paper available online. Each accepted paper must therefore be presented by at least one registered author. Submissions not meeting these guidelines risk rejection without consideration of their merits.
Accepted papers will be published in post-proceedings by Springer in the LNCS series.
Best Paper Award
There will be an ACNS best workshop paper award (with a 500 EUR prize sponsored by Springer), to be selected from the accepted papers of all workshops.