Submit a Manuscript to the Journal
Optimization Methods and Software
For a Special Issue on
Advances in Distributed Optimization
Abstract submission deadline: 15 August 2022
Full manuscript submission deadline: 15 September 2022
Modern applications (e.g., those based on deep learning models) require solving optimization problems with a tremendous number of variables while processing massive amounts of data. Together with privacy concerns, this imposes special requirements on the optimization procedures used, with the leading role played by centralized and decentralized problem formulations and algorithms. From the theoretical perspective, the main research questions concern lower bounds for different problem classes and upper bounds for different algorithms, expressed in terms of the number of communication rounds, oracle calls, or iterations.

Despite significant recent progress, many questions remain open, e.g., lower bounds and algorithms with matching upper bounds for stochastic distributed optimization problems. The list of open questions includes, but is not limited to: lower bounds and optimal algorithms for decentralized optimization on time-varying networks, lower bounds and optimal algorithms for distributed saddle-point problems, and lower bounds and optimal variance-reduced algorithms for empirical risk minimization problems. Another research direction is understanding how recent advances in non-distributed optimization, e.g., efficient zeroth-order or tensor methods, can be extended to the distributed setting. Finally, distributed optimization requires new models and approaches for describing problem classes and classes of algorithms that account for, e.g., local gradient steps between communications or optimization over time-varying networks. In particular, this Special Issue aims to include new provably optimal distributed optimization algorithms in a broad sense: for decentralized convex optimization problems, convex-concave saddle-point problems, and variational inequalities with monotone operators, where a rich theory can be expected.
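To fix ideas, the interplay of communication rounds and local gradient steps mentioned above can be illustrated by a minimal sketch of decentralized gradient descent with gossip averaging. This sketch is not taken from any submission or algorithm named in this call; the mixing matrix, step size, and quadratic local objectives are illustrative assumptions only.

```python
# Minimal illustrative sketch: decentralized gradient descent (DGD).
# Each round consists of one gossip-averaging communication followed by
# one local gradient step at every node. All parameters are assumptions
# chosen for illustration, not a prescribed method.
import numpy as np

def decentralized_gd(grads, W, x0, step=0.1, rounds=200):
    """grads[i] is node i's local gradient oracle; W is a doubly
    stochastic mixing matrix encoding the communication network."""
    X = np.array(x0, dtype=float)  # row i = iterate of node i
    for _ in range(rounds):
        X = W @ X                                   # communication round
        X -= step * np.array([g(x) for g, x in zip(grads, X)])  # local step
    return X

# Example: 3 nodes jointly minimize sum_i 0.5*(x - b_i)^2 with
# b = [0, 3, 6]; the global minimizer is the mean, x* = 3.
b = [0.0, 3.0, 6.0]
grads = [lambda x, bi=bi: x - bi for bi in b]
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])  # doubly stochastic mixing matrix
x = decentralized_gd(grads, W, x0=[[0.0], [0.0], [0.0]])
```

With a constant step size, the average of the node iterates converges to the global minimizer, while individual nodes settle in a neighborhood of it whose size depends on the step size and the heterogeneity of the local gradients, which is exactly the kind of trade-off the bounds discussed above quantify.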
This SI welcomes both original research works and surveys on the interface of optimization and data science, with a particular focus on the challenges identified above. In more detail, possible topics include (but are not limited to):
- First- and zeroth-order methods for distributed optimization
- Tensor methods for distributed optimization
- Stochastic distributed optimization
- Optimization with intermittent communications
- Variance reduction for stochastic distributed optimization
- Min-max optimization (saddle-point problems) in the distributed setting
- Distributed variational inequalities
- Distributed optimization on time-varying networks
- Federated Learning and Personalized Federated Learning
- Quantization and compression in communications
- Distributed optimization under data similarity assumption
You are invited to submit your manuscript at any time before the submission deadline. The submission process starts with abstract submission (deadline: 15 August) and finishes with full manuscript submission (deadline: 15 September). For any inquiries about the appropriateness of contribution topics, please contact Prof. Alexander Gasnikov via [email protected]. Paper length is determined in agreement with the issue editors.
The journal’s submission platform is now available for receiving submissions to this Special Issue. Please refer to the Guide for Authors to prepare your manuscript, and select the article type of “Special Issue: AdvDO” when submitting your manuscript online.