[visionlist] [CFP] Special Issue on Information Storage, Compression and Prediction in Deep Neural Networks

Nicola Catenacci akirapunk at gmail.com
Tue May 14 20:10:37 -04 2024


[Apologies for multiple posts; please distribute to interested people!]

Dear colleagues,

We invite you to submit to the special issue on "Mathematical Understanding
of Information Storage, Compression and Prediction in Neural Networks"
(more details can be found here:
https://www.frontiersin.org/research-topics/59300).


*- The journal(s) -*

The special issue is hosted by either of the following journals:
- Frontiers in Computational Neuroscience (Impact Factor: 3.2, CiteScore: 4.8)
- Frontiers in Neuroinformatics (Impact Factor: 3.5, CiteScore: 5.3)


*- Research Topic -*

Despite their success, deep neural networks (NNs) are still largely used as
"black boxes": the inner workings by which they arrive at their outputs are
not revealed. We still lack the mathematical tools to fully understand the
formal properties of these networks and their limitations. Improving our
theoretical understanding of NNs is particularly important today, as these
tools are being deployed in a growing number of safety-critical scenarios,
necessitating constant scrutiny of their behavior.

The goal of the special issue is to investigate mathematical frameworks
that enable a better understanding of how information is learned and
represented within a neural network, including studies of existing
approaches in this direction.


*- Call for Papers -*

We invite researchers to submit manuscripts focusing on the mathematical
analysis of deep neural networks, including their information-theoretic
interpretation and their statistical limits. Areas relevant to this
special issue include, but are not limited to:

- Theory of Deep Feed-Forward and Recurrent NNs
- Information-Theoretic Principles and Interpretation of NNs
- The Information Bottleneck and Deep Learning
- Compression in Deep Neural Networks
- Analysis of Pattern and Memory Storage in NNs
- Deep NNs for Brain-Inspired Machine Learning and Biological Modeling
- Statistical Physics of Deep Neural Networks
- Dynamical Systems Modeling of NNs
- Neural Network Dynamics and Stability
- Generalization and Regularization in NNs
- Learning Theory and Neural Networks
- Mathematical Models of Learning and Plasticity
- Neural Network Interpretability and Explainability
- Energy-Based Models in Deep Learning
- Neural Network Compression, Pruning, Sparsity and Efficient Computing
- Mathematics of Self-Supervised Deep Learning
- Optimization Landscapes and Loss Surface Analysis
- Neural Network Generalization and Overparameterization
- Mathematical Theories of Transformers and Attention Mechanisms
- Theoretical Foundations of Transfer Learning and Domain Adaptation

Manuscript Submission Deadline: 27 October 2024.


*- Topic Editors -*

Giorgio Gosti, Italian National Research Council (CNR), Italy (giorgio.gosti at cnr.it)
Nicola Catenacci Volpi, University of Hertfordshire, United Kingdom (n.catenacci-volpi at herts.ac.uk)
Nilesh Goel, Birla Institute of Technology and Science, United Arab Emirates



*-----------------------------------*


Dr. Nicola Catenacci Volpi, PhD
Research Fellow in Information Theory for AI & Robotics
Adaptive Systems Research Group
The University of Hertfordshire
Department of Computer Science
College Lane
Hatfield, Hertfordshire AL10 9AB
United Kingdom
E-mail: n.catenacci-volpi at herts.ac.uk