[visionlist] Call For Papers: Sparsity in Neural Networks: Advancing Understanding and Practice

Wuyang Chen chenwydj at gmail.com
Thu May 20 18:32:41 -04 2021

We are excited to announce the workshop “Sparsity in Neural Networks: Advancing Understanding and Practice”. Its inaugural edition will take place online on July 8–9, 2021.

This new workshop will bring together members of many communities working on neural network sparsity to share their perspectives and the latest cutting-edge research. We have assembled an incredible group of speakers, and we are seeking contributed work from the community. 

Attendance is free: please register at the workshop website: https://sites.google.com/view/sparsity-workshop-2021/home

Submission and review will be handled by OpenReview. The link will be announced on the workshop website soon.
Important Dates
June 15, 2021 (AoE): Abstract and supporting materials due
June 25, 2021: Notification of acceptance
July 8-9, 2021: Workshop
Topics (including but not limited to)
Algorithms for Sparsity
Pruning, both for post-training inference and during training
Algorithms for fully sparse training (fixed or dynamic), including biologically inspired algorithms
Algorithms for ephemeral (activation) sparsity
Scaling up sparsity (e.g., large sparsely activated expert models)
Systems for Sparsity
Libraries, kernels, and compilers for accelerating sparse computation
Hardware with support for sparse computation
Theory and Science of Sparsity
When overparameterization is (or is not) necessary
Optimization behavior of sparse networks
Representation ability of sparse networks
Sparsity and generalization
The stability of sparse models
Forgetting owing to sparsity, including fairness, privacy, and bias concerns
Connecting neural network sparsity with traditional sparse dictionary modeling
Applications for Sparsity
Resource-efficient learning at the edge or in the cloud
Data-efficient learning for sparse models
Communication-efficient distributed or federated learning with sparse models 
Graph and network science applications

This workshop is non-archival, and it will not have proceedings. We permit under-review or concurrent submissions. Submissions will receive one of three possible decisions:

Accept (Spotlight Presentation). The authors will be invited to present the work during the main workshop sessions, with live Q&A.
Accept (Poster Presentation). The authors will be invited to present their work as a poster during the workshop’s interactive poster sessions.
Reject. The paper will not be presented at the workshop.
Eligible Work
The latest research innovations at all stages of the research process, from work-in-progress to recently published papers
We define “recent” as presented within one year of the workshop, i.e., the manuscript first became publicly available (on arXiv or elsewhere) no earlier than July 9, 2020.
Position or survey papers on any topics relevant to this workshop (see above)

Required materials
One mandatory abstract (250 words or fewer) describing the work
One or more of the following accompanying materials describing the work in further detail. Higher-quality accompanying materials improve the likelihood of acceptance and of the work being spotlighted with an oral presentation.
A poster (in PDF form) presenting results of work-in-progress.
A link to a blog post (e.g., distill.pub, Medium) describing results.
A workshop paper of approximately four pages in length presenting results of work-in-progress. Papers should be submitted using the NeurIPS 2021 format.
A position paper with no page limit.
A published paper in the form that it was published. We will only consider papers that were published in the year prior to this workshop.

We hope you will join us in attendance!

Best Regards,
On behalf of the organizer team (Ari, Atlas, Jonathan, Utku, Michela, Siddhant, Elena, Chang, Trevor, Decebal, and Erich)