Cooperative distributed control is a relatively new and rapidly developing area of control theory and engineering. Instead of a single centralized controller, large systems are treated as collections of autonomous subsystems, each with local computation and communication capabilities. The broad aim is to solve classical problems, such as stabilization, tracking, estimation, and optimization, through local communication and team cooperation that is robust to changes in communication topology and to disturbances. Relevant topics of classical control theory are revisited, and a brief review of the background mathematics needed for the course is provided. The potential use of multi-agent cooperation in challenging applications, in which an environment is to be controlled or observed, is also discussed.

Theory: review of qualitative properties of dynamical systems; motivation for distributed multi-agent systems; elements of algebraic graph theory; distributed estimation and control; consensus and synchronization of linear/nonlinear, continuous/discrete-time systems; cooperative stability, optimality, and robustness; distributed optimization: multi-player game theory; interactions with the environment.
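To make the consensus topic above concrete, the following is a minimal sketch (not course material) of the standard discrete-time average-consensus protocol on an undirected graph: each agent repeatedly moves its state toward its neighbors' states using only local communication. The graph, step size, and variable names here are illustrative assumptions.

```python
def consensus_step(values, neighbors, eps):
    """One synchronous update: x_i <- x_i + eps * sum_j (x_j - x_i),
    where j ranges over the neighbors of agent i (local information only)."""
    return [
        x + eps * sum(values[j] - x for j in neighbors[i])
        for i, x in enumerate(values)
    ]

# Illustrative path graph 0-1-2-3 (connected, maximum degree 2).
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
values = [4.0, 0.0, 2.0, 10.0]   # initial agent states; their average is 4.0
eps = 0.3                        # step size, chosen below 1/deg_max = 0.5

for _ in range(200):
    values = consensus_step(values, neighbors, eps)

print(values)  # all entries approach the initial average 4.0
```

Because the graph is undirected, each update preserves the sum of the states, and for a connected graph with 0 < eps < 1/deg_max all states converge to the average of the initial values, which is the prototypical cooperative-control result the course builds on.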