Robust, Explainable, and Privacy-Preserving Deep Learning
https://www.journals.elsevier.com/knowledge-based-systems/call-for-papers/robust-explainable-and-privacy-preserving-deep-learning
Aim and Scope
The exponentially growing availability of data such as images, videos, and speech from myriad sources, including social media and the Internet of Things, is driving the demand for high-performance data analysis algorithms. Deep learning is currently an extremely active research area in machine learning and pattern recognition. It provides computational models composed of multiple nonlinear processing layers that learn and represent data with increasing levels of abstraction. Deep neural networks can implicitly capture the intricate structure of large-scale data and can be deployed on cloud and high-performance computing platforms. The deep learning approach has demonstrated remarkable performance across a range of applications, including computer vision, image classification, face/speech recognition, natural language processing, and medical communications.

However, deep neural networks yield ‘black-box’ input-output mappings that can be challenging to explain to users. This lack of interpretability is especially unacceptable in healthcare, cybersecurity, and the legal field, where decisions may have a profound impact on people’s lives. In addition, many other open problems and challenges remain, such as computational and time costs, repeatability of results, convergence, and the ability to learn from very small amounts of data and to evolve dynamically. Further, despite its enormous societal benefits, deep learning can pose real threats to personal privacy. In healthcare, for example, deep neural networks and other machine learning models are built from patients’ personal and highly sensitive data, such as clinical records or tracked health data. Moreover, these models can be vulnerable to attackers trying to infer the sensitive data that was used to build them. This raises important research questions about how to develop deep learning models that protect private data against inference attacks while remaining accurate and useful predictive models.
This Special Issue will present robust, explainable, and efficient next-generation deep learning algorithms with data privacy and theoretical guarantees for solving challenging artificial intelligence problems. This Special Issue aims to: 1) improve the understanding and explainability of deep neural networks; 2) improve the accuracy of deep learning by leveraging new stochastic optimization and neural architecture search methods; 3) strengthen the mathematical foundations of deep neural networks; 4) design new data privacy mechanisms that optimally trade off utility and privacy; and 5) increase the computational efficiency and stability of the deep learning training process with new algorithms that scale. Potential topics include, but are not limited to, the following:
· Novel theoretical insights into deep neural networks
· Exploration of post-hoc interpretation methods that shed light on how deep learning models produce specific predictions and representations
· Investigation of interpretable models that are self-explanatory by construction and incorporate interpretability directly into the structure of the deep learning model
· Quantifying or visualizing the interpretability of deep neural networks
· Stability improvement of deep neural network optimization
· Optimization methods for deep learning
· Privacy-preserving machine learning (e.g., federated machine learning, learning over encrypted data); a minimal illustrative sketch of the federated setting follows this list
· Novel deep learning approaches in the applications of image/signal processing, business intelligence, games, healthcare, bioinformatics, and security
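As an illustration of the privacy-preserving learning setting listed above, the following is a minimal sketch of federated averaging in plain Python/NumPy. It assumes a simple linear model and synthetic client data; the function names (client_update, fed_avg) and all parameter values are illustrative assumptions, not a prescribed framework or baseline. The point is only that raw training examples never leave the clients: just model weights are exchanged and averaged.

    # Minimal federated averaging (FedAvg) sketch in NumPy.
    # Everything here (model, data, client_update, fed_avg) is illustrative:
    # raw examples stay on the clients and only model weights are shared.
    import numpy as np

    def client_update(weights, X, y, lr=0.1, epochs=5):
        """One client's local training: a few gradient steps on its own data."""
        w = weights.copy()
        for _ in range(epochs):
            preds = X @ w                      # linear model for simplicity
            grad = X.T @ (preds - y) / len(y)  # mean-squared-error gradient
            w -= lr * grad
        return w

    def fed_avg(client_datasets, dim, rounds=20):
        """Server loop: broadcast weights, collect local updates, average them."""
        global_w = np.zeros(dim)
        for _ in range(rounds):
            local_ws, sizes = [], []
            for X, y in client_datasets:
                local_ws.append(client_update(global_w, X, y))
                sizes.append(len(y))
            # Weighted average of client models, proportional to local data size.
            global_w = np.average(np.stack(local_ws), axis=0,
                                  weights=np.array(sizes, dtype=float))
        return global_w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        true_w = np.array([2.0, -1.0, 0.5])
        # Three clients, each holding its own private data; the server never sees it.
        clients = []
        for n in (40, 60, 80):
            X = rng.normal(size=(n, 3))
            y = X @ true_w + 0.1 * rng.normal(size=n)
            clients.append((X, y))
        print("federated estimate:", fed_avg(clients, dim=3))

A full treatment would, of course, also need to consider what the shared updates themselves leak, for example by combining such a scheme with secure aggregation or differential privacy, which is precisely the utility-privacy trade-off this Special Issue solicits.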
Important Dates
· Submission Deadline: August 31, 2021
· First Review Decision: September 30, 2021
· Revisions Due: October 31, 2021
· Final Decision: November 30, 2021
· Final Manuscript: December 31, 2021
Dissemination, Composition and Review Procedures
· A Call for Papers (CFP) will be circulated to invite submissions.
· World-leading researchers will be invited as authors.
· To further attract contributors from around the world, the CFP will be advertised across numerous society newsletters, websites, mailing lists, conferences, associations, and social media groups.
This special issue will follow the timeline given above from submission to publication, while maintaining the rigorous peer review and high standards of the journal. All manuscripts submitted must be original, not under consideration elsewhere, and not previously published. The Guide for Authors and other relevant information for submitting manuscripts are available on the Guide for Authors page. Authors can expect their manuscripts to be reviewed fairly and in a skilled, conscientious manner. To enhance objectivity and to guarantee high scientific quality and relevance to the subject, three peer reviewers will be selected to evaluate each manuscript. The peer review process will be designed to avoid bias and conflicts of interest on the part of reviewers, who will be experts in the relevant field of research. A key criterion in publication decisions will be the manuscript’s fit with the special issue and the readership of KBS. Papers will be published online in continuous flow as soon as they are accepted.
Submission Instructions
The submission system will open approximately one week before the first submissions are expected. When submitting your manuscript, please select the article type “VSI: Deep Learning”. Please submit your manuscript before the submission deadline.
All submissions deemed suitable for peer review will be assessed by at least two independent reviewers. Once your manuscript is accepted, it will go into production and be simultaneously published in the current regular issue and pulled into the online Special Issue. Articles from this Special Issue will therefore appear in different regular issues of the journal, but they will be clearly marked and branded as Special Issue articles.
Please see an example here: https://www.sciencedirect.com/journal/science-of-the-total-environment/special-issue/10SWS2W7VVV
Please ensure you read the Guide for Authors before writing your manuscript. The Guide for Authors and the link to submit your manuscript are available on the Journal’s homepage.