Background and motivation


Scope of application

Over the past decades, the study of systems of particles has become an important part of many research areas, from theoretical physics to applied biology and computational mathematics. Some current research trends include the following.

  • Since the 1980s, there has been a joint effort from biologists, physicists, computer graphics scientists and, more recently, mathematicians to propose accurate models for large self-organized animal societies. While the motivations of the different communities are largely independent, a common goal is to explain and reproduce the emergence of complex patterns from simple interaction rules. Typical examples of complex systems include flocks of birds, fish schools, herds of mammals or ant colonies. It seems reasonable to consider that none of these animals has an acute consciousness of the whole system organization but rather follows simple behavioral patterns: stay close to the group, do not collide with another individual, copy the attitude of the close neighbours, etc. These models are often called swarming models or collective dynamics models [1, 4, 13, 41].

  • The microscopic world is full of complex systems made of many simple entities. Colonies of bacteria [42] are a natural example reminiscent of the macroscopic animal societies described above. But complex patterns can also emerge from systems of non-living particles. Our body is actually a very complex self-organized assembly of cells which dictates every aspect of our life: our brain works thanks to the communications between billions of neurons [2, 21], reproduction is based on the competition between millions of spermatozoa [9] and sometimes, sadly, death is caused by the relentless division of cancer cells. Unlike in the previous models, the communication between the particles cannot be based on visual perception; it relies instead on chemical agents, physical (geometrical) constraints, etc.

  • On a completely different side, there is an ever-growing number of methods in computational mathematics which are based on the simulation of systems of particles. The pioneering Particle Swarm Optimization method [29] has shown how biologically-inspired artificial systems of particles can be used to solve tough optimization problems. Following these ideas, other Swarm Intelligence optimization algorithms have been proposed up to very recently [22, 33, 38, 39]. Since the beginning of the 2000s, particle systems have also been at the core of filtering [10, 17, 18, 20, 28] and sampling methods [3, 8]. Recently, a particle-based interpretation [7, 12, 31, 34, 35] of the training task of neural networks has led to new theoretical convergence results.

Why do we need to simulate particle systems?

Beyond the self-explanatory applications for particle-based algorithms in computational mathematics, the simulation of systems of particles is also a crucial modelling tool in Physics and Biology.

  • On the one hand, field experiments are useful to collect data and trigger new modelling ideas. On the other hand, numerical simulations become necessary to test these ideas and to calibrate the models. Numerical experiments can be conducted to identify which mechanisms are able to produce a specific phenomenon in a controlled environment [6].

  • On a more theoretical side, the direct mathematical study of particle systems can rapidly become incredibly difficult. Inspired by the kinetic theory of gases, many mesoscopic and macroscopic models have been proposed to model the average statistical behavior of particle systems rather than the individual motion of each particle [4, 16, 37]. Within this framework, a particle system is rather seen as a fluid and is described by Partial Differential Equations (PDE) reminiscent of the theory of fluid dynamics. PDE models are more tractable, both theoretically and numerically. However, checking the validity of these models is not always easy: it is sometimes only postulated based on phenomenological considerations. In order to design good models, it is often necessary to go back and forth between the PDE models and the numerical simulation of the underlying particle systems.

The development of the SiSyPHE library was initially motivated by the study of body-oriented particles [15]. The (formal) derivation of a macroscopic PDE model from the particle system has led to a novel conjecture which postulates the existence of a class of so-called bulk topological states [14]. The quantitative comparison between this theoretical prediction and the numerical simulation of the particle system in a suitable regime (with more than \(10^6\) particles) has confirmed the existence of these new states of matter. The study of their physical properties, which are observed in the numerical experiments but not readily explained by the PDE model, is an ongoing work.

Mean-field particle systems

Currently, the models implemented in the SiSyPHE library belong to the family of mean-field models. This means that the motion of each particle is influenced by the average behavior of the whole system. The Vicsek model [16, 40] and the Cucker-Smale model [4, 11, 24] are two popular examples of mean-field models where each particle tries to move in the average direction of motion of its neighbours (this produces a so-called flocking behavior).

The mathematical point of view

From a mathematical point of view, a mean-field particle system with \(N\) particles is defined as a Markov process in \((\mathbb{R}^d)^N\) with a generator \(\mathcal{L}_N\) whose action on a test function \(\varphi_N\) is of the form:

\[\mathcal{L}_N\varphi_N(x^1,\ldots,x^N) = \sum_{i=1}^N L_{\mu_{\mathbf{x}^N}}\diamond_i \varphi_N (x^1,\ldots,x^N),\]

where \(\mathbf{x}^N = (x^1,\ldots,x^N)\in (\mathbb{R}^d)^N\) and

\[\mu_{\mathbf{x}^N} := \frac{1}{N}\sum_{i=1}^N \delta_{x^i},\]

is the so-called empirical measure. The operator \(L_{\mu_{\mathbf{x}^N}}\) depends on the empirical measure and acts on one-variable test functions on \(\mathbb{R}^d\). Given an operator \(L\) and a test function \(\varphi_N\), the notation \(L\diamond_i \varphi_N\) denotes the function

\[L\diamond_i \varphi_N : (x^1,\ldots,x^N) \in (\mathbb{R}^d)^N \mapsto L[x\mapsto\varphi_N(x^1,\ldots,x^{i-1},x,x^{i+1},\ldots,x^N)](x^i)\in \mathbb{R}.\]

The dynamics of the mean-field system depends on the operator \(L_{\mu_{\mathbf{x}^N}}\) which is typically either a diffusion operator [16, 24] or a jump operator [2, 19]. In the first case, the mean-field particle system can alternatively be defined by a system of \(N\) coupled Stochastic Differential Equations of the form:

\[\mathrm{d}X^i_t = b(X^i_t,\mu_{\mathcal{X}^N_t})\mathrm{d}t + \sigma(X^i_t,\mu_{\mathcal{X}^N_t})\mathrm{d} B^i_t,\quad i\in\{1,\ldots,N\},\]
\[\mu_{\mathcal{X}^N_t} := \frac{1}{N}\sum_{i=1}^N \delta_{X^i_t},\]

where \((B^i_t)_t\) are \(N\) independent Brownian motions and the coefficients \(b\) and \(\sigma\) are called the drift and diffusion coefficients. In most cases, these coefficients are linear or quasi-linear, which means that they are of the form

\[b(X^i_t,\mu_{\mathcal{X}^N_t}) = \tilde{b}{\left(X^i_t, \frac{1}{N}\sum_{j=1}^N K(X^i_t,X^j_t)\right)},\]

for two given functions \(\tilde{b}:\mathbb{R}^d\times\mathbb{R}^n\to\mathbb{R}^d\) and \(K:\mathbb{R}^d\times\mathbb{R}^d\to \mathbb{R}^n\).
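To make this quasi-linear structure concrete, here is a minimal sketch (in plain Python, not part of the SiSyPHE API) of such a drift for \(d = n = 1\); the Gaussian kernel and the choice \(\tilde{b}(x, m) = m\) are purely illustrative.

```python
import math

def drift(i, positions, K, b_tilde):
    """Quasi-linear drift b(x^i, mu) = b_tilde(x^i, (1/N) sum_j K(x^i, x^j))."""
    x_i = positions[i]
    N = len(positions)
    # Average of the interaction kernel against the empirical measure.
    avg = sum(K(x_i, x_j) for x_j in positions) / N
    return b_tilde(x_i, avg)

# Illustrative choices (d = n = 1): Gaussian kernel and b_tilde(x, m) = m.
K = lambda x, y: math.exp(-(x - y) ** 2)
b_tilde = lambda x, m: m

b_0 = drift(0, [0.0, 1.0, 2.0], K, b_tilde)
```

The point is only that the drift of particle \(i\) is a function of its own state and of an average over the whole system, which is what makes the model mean-field.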

When the particles are initially statistically independent, it can be shown that when \(N\to+\infty\), each particle \(X^i_t\) converges towards an independent copy of the solution of the so-called McKean-Vlasov diffusion process [30, 32, 36] defined by the Stochastic Differential Equation

\[\mathrm{d}\overline{X}_t = b(\overline{X}_t,f_t)\mathrm{d}t + \sigma(\overline{X}_t,f_t)\mathrm{d} B_t,\]

where \((B_t)_t\) is a Brownian motion and \(f_t\) is the law of the process \(\overline{X}_t\). It satisfies the Fokker-Planck Partial Differential Equation

\[\partial_t f_t = - \nabla\cdot(b(x,f_t)f_t) + \frac{1}{2}\sum_{i,j=1}^d \partial_{x_i}\partial_{x_j}(a_{ij}(x,f_t)f_t),\]

with \(x=(x_1,\ldots,x_d)\in \mathbb{R}^d\) and \(a=\sigma\sigma^\mathrm{T}\). This phenomenon is called propagation of chaos, following the terminology introduced by Kac in the 50’s [25, 27, 30, 32, 36].


A popular example of mean-field particle system is the (stochastic) Cucker-Smale model [4, 11, 24]. Each particle is defined by its position \(X^i_t \in\mathbb{R}^d\) and its velocity \(V^i_t\in\mathbb{R}^d\) which evolve according to the system of \(2N\) Stochastic Differential Equations:

\[\mathrm{d}X^i_t = V^i_t\mathrm{d}t,\quad \mathrm{d}V^i_t = \frac{1}{N}\sum_{j=1}^N K(|X^j_t-X^i_t|)(V^j_t-V^i_t)\mathrm{d}t + \mathrm{d}B^i_t,\]

where \(K:[0,+\infty)\to[0,+\infty)\) is an observation kernel which models the visual perception of the particles. In this example, the function \(K\) is a smooth function vanishing at infinity and the communication between the particles is based on the distance between them. The motion of the particles follows Newton’s laws of motion with an additional stochastic term. The term \(V^j_t-V^i_t\) is a relaxation force (for a quadratic potential) which tends to align the velocities of particles \(i\) and \(j\).
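As a toy illustration (again in plain Python, independent of the SiSyPHE implementation), the stochastic Cucker-Smale system can be integrated with an explicit Euler-Maruyama step; dimension \(d = 1\) and the kernel \(K(r) = 1/(1+r^2)\) are illustrative choices, not prescribed by the model.

```python
import math
import random

def cucker_smale_step(X, V, dt, K, rng):
    """One Euler-Maruyama step of the stochastic Cucker-Smale system (d = 1)."""
    N = len(X)
    new_X, new_V = [], []
    for i in range(N):
        # Mean-field alignment force: (1/N) * sum_j K(|X^j - X^i|) (V^j - V^i).
        force = sum(K(abs(X[j] - X[i])) * (V[j] - V[i]) for j in range(N)) / N
        new_X.append(X[i] + V[i] * dt)
        # Brownian increment dB ~ sqrt(dt) * N(0, 1).
        new_V.append(V[i] + force * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0))
    return new_X, new_V

rng = random.Random(0)
K = lambda r: 1.0 / (1.0 + r * r)   # illustrative observation kernel
X = [rng.uniform(-1, 1) for _ in range(50)]
V = [rng.uniform(-1, 1) for _ in range(50)]
for _ in range(100):
    X, V = cucker_smale_step(X, V, 0.01, K, rng)
```

Note that each step already costs \(\mathcal{O}(N^2)\) operations because of the double loop over particles; this is precisely the bottleneck discussed below.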

Simulating mean-field particle systems

On a computer, the above examples, which are time-continuous, need to be discretized using, for instance, one of the classical numerical schemes for Stochastic Differential Equations. The difficulty then lies in the evaluation of the empirical measure at each time-step of the numerical scheme.
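For instance, a standard choice (sketched here only as one possible discretization, not a description of the library's internal scheme) is the explicit Euler-Maruyama scheme with time step \(\Delta t\):

\[X^i_{k+1} = X^i_k + b(X^i_k,\mu_{\mathcal{X}^N_k})\Delta t + \sigma(X^i_k,\mu_{\mathcal{X}^N_k})\sqrt{\Delta t}\,\xi^i_k,\]

where \(X^i_k\) approximates \(X^i_{t_k}\) at time \(t_k = k\Delta t\) and the \(\xi^i_k\) are independent standard Gaussian random vectors.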

In the example of the Cucker-Smale model, the force exerted on a particle is the sum of \(N\) small relaxation forces of order \(1/N\). The total number of operations required is thus of order \(\mathcal{O}(N)\) for each particle. Since there are \(N\) particles, the total time complexity of the algorithm is thus \(\mathcal{O}(N^2T)\) where \(T\) is the total number of iterations of the numerical scheme.

This quadratic cost is the main bottleneck in the simulation of mean-field particle systems. As explained in the documentation of the KeOps library, the evaluation of the \(N\) forces at time \(t\)

\[F_i(t) = \frac{1}{N}\sum_{j=1}^N K(|X^j_t-X^i_t|)(V^j_t-V^i_t), \quad i\in\{1,\ldots,N\},\]

is called a kernel operation and can be understood as a discrete convolution operation (or matrix-vector product) between the matrix of distances \((K_{ij})_{i,j\in\{1,\ldots,N\}}\), where \(K_{ij} = K(|X^j_t-X^i_t|)\), and the vector of velocity differences \((V^j_t-V^i_t)_{j\in\{1,\ldots, N\}}\). When \(N\) is large (say \(N>10^4\)), such an operation is too costly even for array-based programming languages such as Matlab: the \(N\times N\) kernel matrix \((K_{ij})_{i,j}\) would simply not fit into the memory. With lower-level languages (Fortran, C), this operation can be implemented more efficiently with two nested loops, but with a significantly higher global coding effort and with less versatility.
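The memory issue can be illustrated in plain Python (outside any particular library): the two functions below compute exactly the same forces, one by materializing the full \(N\times N\) kernel matrix and one by accumulating each row on the fly with \(\mathcal{O}(N)\) memory; the time cost stays \(\mathcal{O}(N^2)\) in both cases. All names here are illustrative.

```python
def forces_matrix(X, V, K):
    """Materialize the full N x N kernel matrix: O(N^2) memory."""
    N = len(X)
    K_mat = [[K(abs(X[j] - X[i])) for j in range(N)] for i in range(N)]
    return [sum(K_mat[i][j] * (V[j] - V[i]) for j in range(N)) / N
            for i in range(N)]

def forces_streaming(X, V, K):
    """Accumulate each row on the fly: O(N) memory, same O(N^2) time."""
    N = len(X)
    return [sum(K(abs(X[j] - X[i])) * (V[j] - V[i]) for j in range(N)) / N
            for i in range(N)]

K = lambda r: 1.0 / (1.0 + r * r)
X = [0.1 * i for i in range(20)]
V = [(-1) ** i * 0.5 for i in range(20)]
assert all(abs(a - b) < 1e-12
           for a, b in zip(forces_matrix(X, V, K), forces_streaming(X, V, K)))
```

The streaming version is, in spirit, what a symbolic (lazy) kernel framework does for you, with the loops compiled and parallelized instead of interpreted.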

Over the past decades, several workarounds have been proposed. Popular methods include

  • the low-rank decomposition of the kernel matrix,

  • the fast multipole methods [23], which treat short- and long-range interactions differently,

  • the Verlet list method, which is based on a grid decomposition of the spatial domain to reduce the problem to only short-range interactions between subsets of the particle system,

  • the Random Batch Method [26] which is based on a stochastic approximation where only interactions between randomly sampled subsets (batches) of the particle system are computed.
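In the spirit of the Random Batch Method [26], the following sketch (illustrative only, not the reference implementation) replaces the full \(\mathcal{O}(N^2)\) interaction sum by interactions inside randomly shuffled batches of size \(p\), for a cost of \(\mathcal{O}(Np)\) per step.

```python
import random

def random_batch_forces(X, V, K, p, rng):
    """Approximate the mean-field forces by computing interactions only
    inside randomly shuffled batches of size p (cost O(N*p) per step)."""
    N = len(X)
    idx = list(range(N))
    rng.shuffle(idx)                       # fresh random batches at each call
    forces = [0.0] * N
    for start in range(0, N, p):
        batch = idx[start:start + p]
        if len(batch) < 2:                 # a singleton batch feels no force
            continue
        for i in batch:
            # Within-batch average, normalized by the batch size minus one.
            forces[i] = sum(K(abs(X[j] - X[i])) * (V[j] - V[i])
                            for j in batch if j != i) / (len(batch) - 1)
    return forces

rng = random.Random(0)
K = lambda r: 1.0 / (1.0 + r * r)
X = [rng.uniform(-1, 1) for _ in range(100)]
V = [rng.uniform(-1, 1) for _ in range(100)]
F = random_batch_forces(X, V, K, 4, rng)
```

Justifying that the resulting dynamics stays close to the true mean-field dynamics is precisely the kind of approximation analysis carried out in [26].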

All these methods require a significant amount of work, either to implement them efficiently or to justify the approximation procedures.

The SiSyPHE library

The present implementation is based on recent libraries originally developed for machine learning purposes to significantly accelerate such tensor (array) computations, namely the PyTorch package and the KeOps library [5]. Using the KeOps framework, the kernel matrix is a symbolic matrix defined by a mathematical formula and no approximation is required in the computation of the interactions (up to the time discretization). The SiSyPHE library speeds up both traditional Python and low-level CPU implementations by one to three orders of magnitude for systems with up to several millions of particles. Although the library is mainly intended to be used on a GPU, the implementation is fully functional on the CPU with a significant gain in efficiency.

Moreover, the versatile object-oriented Python interface is well suited to the comparison and study of new and classical many-particle models. This aspect is fundamental in applications in order to conduct ambitious numerical experiments in a systematic framework, even for particles with a complex structure, and with a significantly reduced computational cost.



References

G. Albi, N. Bellomo, L. Fermo, S.-Y. Ha, J. Kim, L. Pareschi, D. Poyato, and J. Soler. Vehicular traffic, crowds, and swarms: From kinetic theory and multiscale methods to applications and research perspectives. Math. Models Methods Appl. Sci., 29(10):1901–2005, 2019. doi:10.1142/S0218202519500374.

L. Andreis, P. Dai Pra, and M. Fischer. McKean–Vlasov limit for interacting systems with simultaneous jumps. Stoch. Anal. Appl., 36(6):960–995, 2018. doi:10.1080/07362994.2018.1486202.

O. Cappé, A. Guillin, J. M. Marin, and C. P. Robert. Population Monte Carlo. J. Comput. Graph. Statist., 13(4):907–929, 2004. doi:10.1198/106186004X12803.

J. A. Carrillo, M. Fornasier, G. Toscani, and F. Vecil. Particle, kinetic, and hydrodynamic models of swarming. In G. Naldi, L. Pareschi, and G. Toscani, editors, Mathematical Modeling of Collective Behavior in Socio-Economic and Life Sciences, pages 297–336. Birkhäuser Boston, 2010. doi:10.1007/978-0-8176-4946-3_12.

B. Charlier, J. Feydy, J. A. Glaunès, F.-D. Collin, and G. Durif. Kernel Operations on the GPU, with Autodiff, without Memory Overflows. J. Mach. Learn. Res., 22(74):1–6, 2021.

H. Chaté, F. Ginelli, G. Grégoire, and F. Raynaud. Collective motion of self-propelled particles interacting without cohesion. Phys. Rev. E, 77(4):046113, 2008. doi:10.1103/PhysRevE.77.046113.

L. Chizat and F. Bach. On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31 (NeurIPS 2018), 3040–3050. Montreal, Canada, 2018. Curran Associates, Inc.

G. Clarté, A. Diez, and J. Feydy. Collective Proposal Distributions for Nonlinear MCMC samplers: Mean-Field Theory and Fast Implementation. arXiv preprint: arXiv:1909.08988, 2021.

A. Creppy, O. Praud, X. Druart, P. L. Kohnke, and F. Plouraboué. Turbulence of swarming sperm. Phys. Rev. E, 92(3):032722, 2015. doi:10.1103/PhysRevE.92.032722.

D. Crisan and A. Doucet. A survey of convergence results on particle filtering methods for practitioners. IEEE Trans. Signal Process., 50(3):736–746, 2002. doi:10.1109/78.984773.

F. Cucker and S. Smale. On the mathematics of emergence. Jpn. J. Math., 2(1):197–227, 2007. doi:10.1007/s11537-007-0647-x.

V. De Bortoli, A. Durmus, X. Fontaine, and U. Simsekli. Quantitative Propagation of Chaos for SGD in Wide Neural Networks. arXiv preprint: arXiv:2007.06352, 2020.

P. Degond. Mathematical models of collective dynamics and self-organization. In Proceedings of the International Congress of Mathematicians ICM 2018, volume 4, 3943–3964. Rio de Janeiro, Brazil, August 2018. doi:10.1142/9789813272880_0206.

P. Degond, A. Diez, and M. Na. Bulk topological states in a new collective dynamics model. arXiv preprint: arXiv:2101.10864, 2021.

P. Degond, A. Frouvelle, S. Merino-Aceituno, and A. Trescases. Alignment of Self-propelled Rigid Bodies: From Particle Systems to Macroscopic Equations. In G. Giacomin, S. Olla, E. Saada, H. Spohn, and G. Stoltz, editors, Stochastic Dynamics Out of Equilibrium, Institut Henri Poincaré, Paris, France, 2017, number 282 in Springer Proceedings in Mathematics & Statistics, pages 28–66. Springer, Cham, 2019. doi:10.1007/978-3-030-15096-9_2.

P. Degond and S. Motsch. Continuum limit of self-driven particles with orientation interaction. Math. Models Methods Appl. Sci., 18(Suppl.):1193–1215, 2008. doi:10.1142/S0218202508003005.

P. Del Moral. Feynman-Kac Formulae, Genealogical and Interacting Particle Systems with Applications. Probability and Its Applications. Springer-Verlag New York, 2004.

P. Del Moral. Mean field simulation for Monte Carlo integration. Number 126 in Monographs on Statistics and Applied Probability. CRC Press, Taylor & Francis Group, 2013. ISBN 9781466504059.

G. Dimarco and S. Motsch. Self-alignment driven by jump processes: Macroscopic limit and numerical investigation. Math. Models Methods Appl. Sci., 26(07):1385–1410, 2016. doi:10.1142/S0218202516500330.

A. Doucet, N. de Freitas, and N. Gordon, editors. Sequential Monte Carlo Methods in Practice. Information Science and Statistics. Springer-Verlag New York, 2001. ISBN 978-0-387-95146-1. doi:10.1007/978-1-4757-3437-9.

N. Fournier and E. Löcherbach. On a toy model of interacting neurons. Ann. Inst. Henri Poincaré Probab. Stat., 2016. doi:10.1214/15-AIHP701.

S. Grassi and L. Pareschi. From particle swarm optimization to consensus based optimization: stochastic modeling and mean-field limit. Math. Models Methods Appl. Sci., 31(8):1625–1657, 2020. doi:10.1142/s0218202521500342.

L. Greengard and V. Rokhlin. A fast algorithm for particle simulations. J. Comput. Phys., 73(2):325–348, 1987. doi:10.1016/0021-9991(87)90140-9.

S.-Y. Ha, K. Lee, and D. Levy. Emergence of time-asymptotic flocking in a stochastic Cucker-Smale system. Commun. Math. Sci., 7(2):453–469, 2009. doi:10.4310/cms.2009.v7.n2.a9.

M. Hauray and S. Mischler. On Kac's chaos and related problems. J. Funct. Anal., 266:6055–6157, 2014. doi:10.1016/j.jfa.2014.02.030.

S. Jin, L. Li, and J.-G. Liu. Random batch methods (RBM) for interacting particle systems. arXiv preprint: arXiv:1812.10575, 2019.

M. Kac. Foundations of kinetic theory. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, volume 3, 171–197. University of California Press, Berkeley and Los Angeles, California, 1956.

N. Kantas, A. Doucet, S. S. Singh, and J. M. Maciejowski. An Overview of Sequential Monte Carlo Methods for Parameter Estimation in General State-Space Models. IFAC Proceedings Volumes, 42(10):774–785, 2009. doi:10.3182/20090706-3-FR-2004.00129.

J. Kennedy and R. Eberhart. Particle swarm optimization. In Proceedings of ICNN'95 - International Conference on Neural Networks, volume 4, 1942–1948. Perth, WA, Australia, 1995. IEEE. doi:10.1109/ICNN.1995.488968.

H. P. McKean. Propagation of chaos for a class of non-linear parabolic equations. In A. K. Aziz, editor, Lecture Series in Differential Equations, Volume 2, number 19 in Van Nostrand Mathematical Studies, pages 177–194. Van Nostrand Reinhold Company, 1969.

S. Mei, A. Montanari, and P.-M. Nguyen. A mean field view of the landscape of two-layer neural networks. Proc. Natl. Acad. Sci. USA, 115(33):E7665–E7671, 2018. doi:10.1073/pnas.1806579115.

S. Méléard. Asymptotic Behaviour of some interacting particle systems; McKean-Vlasov and Boltzmann models. In D. Talay and L. Tubaro, editors, Probabilistic Models for Nonlinear Partial Differential Equations, number 1627 in Lecture Notes in Mathematics. Springer-Verlag Berlin Heidelberg, 1996. doi:10.1007/bfb0093177.

R. Pinnau, C. Totzeck, O. Tse, and S. Martin. A consensus-based model for global optimization and its mean-field limit. Math. Models Methods Appl. Sci., 27(01):183–204, 2017. doi:10.1142/S0218202517400061.

G. M. Rotskoff and E. Vanden-Eijnden. Trainability and Accuracy of Neural Networks: An Interacting Particle System Approach. arXiv preprint: arXiv:1805.00915, 2019.

J. Sirignano and K. Spiliopoulos. Mean Field Analysis of Neural Networks: A Law of Large Numbers. SIAM J. Appl. Math., 80(2):725–752, 2020. doi:10.1137/18M1192184.

A.-S. Sznitman. Topics in propagation of chaos. In Éc. Été Probab. St.-Flour XIX—1989, pages 165–251. Springer, 1991. doi:10.1007/bfb0085169.

J. Toner and Y. Tu. Flocks, herds, and schools: A quantitative theory of flocking. Phys. Rev. E, 58(4):4828–4858, 1998. doi:10.1103/PhysRevE.58.4828.

C. Totzeck. Trends in Consensus-based optimization. arXiv preprint: arXiv:2104.01383, 2021.

C. Totzeck, R. Pinnau, S. Blauth, and S. Schotthöfer. A Numerical Comparison of Consensus-Based Global Optimization to other Particle-based Global Optimization Schemes. PAMM. Proc. Appl. Math. Mech., 2018. doi:10.1002/pamm.201800291.

T. Vicsek, A. Czirók, E. Ben-Jacob, I. Cohen, and O. Shochet. Novel Type of Phase Transition in a System of Self-Driven Particles. Phys. Rev. Lett., 75(6):1226–1229, 1995. doi:10.1103/PhysRevLett.75.1226.

T. Vicsek and A. Zafeiris. Collective motion. Phys. Rep., 517(3-4):71–140, 2012. doi:10.1016/j.physrep.2012.03.004.

H. H. Wensink, J. Dunkel, S. Heidenreich, K. Drescher, R. E. Goldstein, H. Löwen, and J. M. Yeomans. Meso-scale turbulence in living fluids. Proc. Natl. Acad. Sci. USA, 109(36):14308–14313, 2012. doi:10.1073/pnas.1202032109.