Model predictive control
Model predictive control (MPC) is an advanced method of process control that is used to control a process while satisfying a set of constraints. It has been in use in the process industries in chemical plants and oil refineries since the 1980s. In recent years it has also been used in power system balancing models[1] and in power electronics.[2] Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. The main advantage of MPC is that it allows the current timeslot to be optimized while taking future timeslots into account. This is achieved by optimizing over a finite time horizon, but implementing only the current timeslot and then optimizing again, repeatedly, thus differing from the Linear-Quadratic Regulator (LQR). MPC also has the ability to anticipate future events and can take control actions accordingly; PID controllers do not have this predictive ability. MPC is nearly universally implemented as digital control, although there is research into achieving faster response times with specially designed analog circuitry.[3]
Generalized predictive control (GPC) and dynamic matrix control (DMC) are classical examples of MPC.[4]
Overview
The models used in MPC are generally intended to represent the behavior of complex dynamical systems. The additional complexity of the MPC control algorithm is not generally needed to provide adequate control of simple systems, which are often controlled well by generic PID controllers. Common dynamic characteristics that are difficult for PID controllers include large time delays and high-order dynamics.
MPC models predict the change in the dependent variables of the modeled system that will be caused by changes in the independent variables. In a chemical process, independent variables that can be adjusted by the controller are often either the setpoints of regulatory PID controllers (pressure, flow, temperature, etc.) or the final control element (valves, dampers, etc.). Independent variables that cannot be adjusted by the controller are used as disturbances. Dependent variables in these processes are other measurements that represent either control objectives or process constraints.
MPC uses the current plant measurements, the current dynamic state of the process, the MPC models, and the process variable targets and limits to calculate future changes in the dependent variables. These changes are calculated to hold the dependent variables close to target while honoring constraints on both independent and dependent variables. The MPC typically sends out only the first change in each independent variable to be implemented, and repeats the calculation when the next change is required.
While many real processes are not linear, they can often be considered to be approximately linear over a small operating range. Linear MPC approaches are used in the majority of applications with the feedback mechanism of the MPC compensating for prediction errors due to structural mismatch between the model and the process. In model predictive controllers that consist only of linear models, the superposition principle of linear algebra enables the effect of changes in multiple independent variables to be added together to predict the response of the dependent variables. This simplifies the control problem to a series of direct matrix algebra calculations that are fast and robust.
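As an illustration of this point, the following minimal sketch (not taken from the cited literature; the matrices A and B, the horizon N, the initial state, and the input sequence are illustrative assumptions) shows how superposition lets the whole predicted response of a linear state-space model be written as a single matrix expression, so that candidate input sequences can be evaluated with plain linear algebra.

```python
# Minimal sketch: stacking a linear model's predictions into matrix form.
# A, B, N, x0, and U are illustrative assumptions, not values from the
# article or any cited package.
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])        # example state-transition matrix
B = np.array([[0.0],
              [0.1]])             # example input matrix
N = 5                             # assumed prediction horizon

n, m = A.shape[0], B.shape[1]
# Superposition: X = Phi @ x0 + Gamma @ U, where X stacks the predicted states.
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((N * n, N * m))
for i in range(N):
    for j in range(i + 1):
        Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B

x0 = np.array([1.0, 0.0])         # current (measured) state
U = np.zeros(N * m)               # candidate sequence of future input moves
X = Phi @ x0 + Gamma @ U          # predicted state trajectory over the horizon
```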
When linear models are not sufficiently accurate to represent the real process nonlinearities, several approaches can be used. In some cases, the process variables can be transformed before and/or after the linear MPC model to reduce the nonlinearity. The process can be controlled with nonlinear MPC that uses a nonlinear model directly in the control application. The nonlinear model may be in the form of an empirical data fit (e.g. artificial neural networks) or a high-fidelity dynamic model based on fundamental mass and energy balances. The nonlinear model may be linearized to derive a Kalman filter or specify a model for linear MPC.
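One common route mentioned above is linearizing a nonlinear model around an operating point. The sketch below (with a toy model f and the origin as operating point, both assumptions chosen purely for illustration) obtains A and B matrices by finite-difference Jacobians, which could then feed a Kalman filter or a linear MPC model.

```python
# Minimal sketch: numerically linearizing a nonlinear discrete-time model
# x_{k+1} = f(x_k, u_k). The model f and the operating point are assumptions.
import numpy as np

def f(x, u):
    # hypothetical nonlinear plant model, for illustration only
    return np.array([x[0] + 0.1 * x[1],
                     x[1] + 0.1 * (-np.sin(x[0]) + u[0])])

def jacobians(f, x0, u0, eps=1e-6):
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    fx = f(x0, u0)
    for i in range(n):              # partial derivatives w.r.t. the states
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - fx) / eps
    for j in range(m):              # partial derivatives w.r.t. the inputs
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - fx) / eps
    return A, B

A, B = jacobians(f, np.zeros(2), np.zeros(1))   # linearize at the origin
```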
An algorithmic study by Al-Gherwi, Budman, and Elkamel shows that a dual-mode approach can provide a significant reduction in online computations while maintaining performance comparable to a non-altered implementation. The proposed algorithm solves N convex optimization problems in parallel based on the exchange of information among controllers.[5]
Theory behind MPC
MPC is based on iterative, finite-horizon optimization of a plant model. At time $t$ the current plant state is sampled and a cost-minimizing control strategy is computed (via a numerical minimization algorithm) for a relatively short time horizon in the future: $[t, t+T]$. Specifically, an online or on-the-fly calculation is used to explore state trajectories that emanate from the current state and find (via the solution of Euler–Lagrange equations) a cost-minimizing control strategy until time $t+T$. Only the first step of the control strategy is implemented, then the plant state is sampled again and the calculations are repeated starting from the new current state, yielding a new control and new predicted state path. The prediction horizon keeps being shifted forward, and for this reason MPC is also called receding horizon control. Although this approach is not optimal, in practice it has given very good results. Much academic research has been done to find fast methods of solution of Euler–Lagrange type equations, to understand the global stability properties of MPC's local optimization, and in general to improve the MPC method.[6]
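The receding-horizon idea can be summarized in a few lines of code. The sketch below is illustrative only (the double-integrator model, horizon length, cost weights, and the use of a general-purpose optimizer are all assumptions, not the method of any particular reference): at each sampling instant a short-horizon cost is minimized, only the first move is applied, and the problem is re-solved from the newly sampled state.

```python
# Minimal sketch of receding horizon control. The plant model, horizon, and
# cost weights are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
N = 10                                  # assumed prediction horizon

def horizon_cost(U, x0):
    # quadratic stage costs accumulated along the predicted trajectory
    x, J = x0.copy(), 0.0
    for u in U:
        J += x @ x + 0.1 * u**2
        x = A @ x + B.flatten() * u
    return J + x @ x                    # simple terminal cost

x = np.array([1.0, 0.0])                # current plant state (sampled)
for step in range(20):                  # closed-loop simulation
    res = minimize(horizon_cost, np.zeros(N), args=(x,))
    u0 = res.x[0]                       # implement only the first move
    x = A @ x + B.flatten() * u0        # plant evolves; the horizon recedes
```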
Principles of MPC
Model predictive control is a multivariable control algorithm that uses:
- an internal dynamic model of the process
- a cost function J over the receding horizon
- an optimization algorithm minimizing the cost function J using the control input u
An example of a quadratic cost function for optimization is given by:

$$J = \sum_{i=1}^{N} w_{x_i} \left(r_i - x_i\right)^2 + \sum_{i=1}^{N} w_{u_i} \, \Delta u_i^2$$

without violating constraints (low/high limits), with
- $x_i$: $i$th controlled variable (e.g. measured temperature)
- $r_i$: $i$th reference variable (e.g. required temperature)
- $u_i$: $i$th manipulated variable (e.g. control valve)
- $w_{x_i}$: weighting coefficient reflecting the relative importance of $x_i$
- $w_{u_i}$: weighting coefficient penalizing relatively big changes in $u_i$
etc.
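For concreteness, the short sketch below evaluates this cost for assumed trajectories of two controlled variables over a three-step horizon; the weights and values are illustrative placeholders, not tuning recommendations.

```python
# Minimal sketch: evaluating the quadratic MPC cost J for assumed trajectories.
import numpy as np

x  = np.array([[1.0, 0.8], [0.9, 0.7], [0.8, 0.6]])   # controlled variables x_i
r  = np.ones((3, 2))                                   # reference values r_i
du = np.array([[0.2, 0.1], [0.1, 0.0], [0.0, 0.0]])    # input moves Δu_i
wx = np.array([1.0, 0.5])     # assumed weights on tracking errors
wu = np.array([0.1, 0.1])     # assumed weights penalizing large moves

J = np.sum(wx * (r - x)**2) + np.sum(wu * du**2)       # scalar cost value
```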
Nonlinear MPC
Nonlinear Model Predictive Control, or NMPC, is a variant of model predictive control (MPC) that is characterized by the use of nonlinear system models in the prediction. As in linear MPC, NMPC requires the iterative solution of optimal control problems on a finite prediction horizon. While these problems are convex in linear MPC, in nonlinear MPC they are not necessarily convex anymore. This poses challenges for both NMPC stability theory and numerical solution.[7]
The numerical solution of the NMPC optimal control problems is typically based on direct optimal control methods using Newton-type optimization schemes, in one of the variants: direct single shooting, direct multiple shooting, or direct collocation.[8] NMPC algorithms typically exploit the fact that consecutive optimal control problems are similar to each other. This allows the Newton-type solution procedure to be initialized efficiently by a suitably shifted guess from the previously computed optimal solution, saving considerable amounts of computation time. The similarity of subsequent problems is exploited even further by path-following algorithms (or "real-time iterations") that never attempt to iterate any optimization problem to convergence, but instead take only a few iterations towards the solution of the most current NMPC problem before proceeding to the next one, which is suitably initialized; see, e.g., [9].
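The shifting of the previous solution mentioned above can be expressed very compactly. The sketch below (the array shape and the idea of repeating the last move are illustrative assumptions; real NMPC codes differ in detail) builds the initial guess for the next optimal control problem from the one just solved.

```python
# Minimal sketch: warm-starting the next NMPC problem by shifting the
# previously computed input sequence one step forward (illustrative only).
import numpy as np

def shift_warm_start(U_prev):
    # drop the first (already applied) move and duplicate the last one
    # as a guess for the newly appended horizon step
    return np.vstack([U_prev[1:], U_prev[-1:]])

U_prev = np.zeros((10, 1))            # previous optimal input sequence (assumed shape)
U_guess = shift_warm_start(U_prev)    # initializer for the Newton-type solver
```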
While NMPC applications have in the past mostly been used in the process and chemical industries with comparatively slow sampling rates, NMPC is increasingly being applied, with advancements in controller hardware and computational algorithms, e.g., preconditioning,[10] to applications with high sampling rates, e.g., in the automotive industry, or even when the states are distributed in space (distributed parameter systems).[11] In aerospace, NMPC has recently been used to track optimal terrain-following/avoidance trajectories in real time.[12]
Explicit MPC
Explicit MPC (eMPC) allows fast evaluation of the control law for some systems, in stark contrast to online MPC. Explicit MPC is based on the parametric programming technique, where the solution to the MPC control problem, formulated as an optimization problem, is pre-computed offline.[13] This offline solution, i.e., the control law, is often in the form of a piecewise affine function (PWA); hence the eMPC controller stores the coefficients of the PWA for each subset (control region) of the state space over which the PWA coefficients are constant, as well as the coefficients of some parametric representation of all the regions. For linear MPC every region turns out to be, geometrically, a convex polytope, commonly parameterized by the coefficients of its faces, requiring quantization accuracy analysis.[14] Obtaining the optimal control action then reduces to first determining the region containing the current state and then evaluating the PWA using the coefficients stored for that region. If the total number of regions is small, implementing the eMPC does not require significant computational resources (compared to online MPC) and is uniquely suited to control systems with fast dynamics.[15] A serious drawback of eMPC is the exponential growth of the total number of control regions with respect to some key parameters of the controlled system, e.g., the number of states, which dramatically increases controller memory requirements and makes the first step of PWA evaluation, i.e. searching for the current control region, computationally expensive.
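The two-step online evaluation described above can be sketched as follows. The regions and affine gains here are made-up placeholders (an actual explicit solution would be exported by a parametric programming tool); the point is only the structure: locate the polytopic region containing the measured state, then apply that region's stored affine law.

```python
# Minimal sketch: evaluating an explicit MPC (PWA) control law.
# The regions {x : H x <= h} and affine laws u = F x + g are illustrative.
import numpy as np

regions = [
    {"H": np.array([[ 1.0, 0.0]]), "h": np.array([0.0]),
     "F": np.array([[-0.5, -1.0]]), "g": np.array([0.0])},
    {"H": np.array([[-1.0, 0.0]]), "h": np.array([0.0]),
     "F": np.array([[-1.0, -1.5]]), "g": np.array([0.1])},
]

def empc_control(x):
    for reg in regions:                              # sequential point location
        if np.all(reg["H"] @ x <= reg["h"] + 1e-9):  # is x inside this region?
            return reg["F"] @ x + reg["g"]           # evaluate the stored PWA law
    raise ValueError("state outside all stored control regions")

u = empc_control(np.array([-0.5, 0.2]))
```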
Robust MPC
Robust variants of model predictive control are able to account for set-bounded disturbances while still ensuring state constraints are met. The main approaches to robust MPC are:
- Min-max MPC. In this formulation, the optimization is performed with respect to all possible evolutions of the disturbance.[16] This is the optimal solution to linear robust control problems, however it carries a high computational cost.
- Constraint Tightening MPC. Here the state constraints are tightened by a given margin so that a trajectory can be guaranteed to be found under any evolution of the disturbance.[17]
- Tube MPC. This uses an independent nominal model of the system, and uses a feedback controller to ensure the actual state converges to the nominal state (a minimal sketch of this feedback appears after this list).[18] The amount of separation required from the state constraints is determined by the robust positively invariant (RPI) set, which is the set of all possible state deviations that may be introduced by disturbance with the feedback controller.
- Multi-stage MPC. This uses a scenario-tree formulation, approximating the uncertainty space with a set of samples. The approach is non-conservative because it takes into account that measurement information is available at every time stage in the prediction, so the decisions at each stage can differ and can act as recourse to counteract the effects of uncertainty. The drawback of the approach, however, is that the size of the problem grows exponentially with the number of uncertainties and the prediction horizon.[19]
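As referenced in the Tube MPC item above, the separation between a nominal trajectory and an ancillary feedback can be written in one line. The gain K, the nominal state z and input v, and the measured state x below are all illustrative assumptions; in practice K and the tube size come from the RPI-set analysis cited above.

```python
# Minimal sketch of the tube MPC control law u = v + K (x - z), where v and z
# come from a nominal (disturbance-free) MPC. All numbers are illustrative.
import numpy as np

K = np.array([[-0.8, -1.2]])          # assumed ancillary feedback gain

def tube_control(x, z, v):
    """x: measured state, z: nominal state, v: nominal input."""
    return v + K @ (x - z)

x = np.array([1.05, 0.02])            # true state, perturbed by disturbance
z = np.array([1.00, 0.00])            # nominal state from the nominal MPC
v = np.array([-0.5])                  # nominal input from the nominal MPC
u = tube_control(x, z, v)             # applied input keeps x near the nominal tube
```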
Commercially available MPC software
Commercial MPC packages are available and typically contain tools for model identification and analysis, controller design and tuning, as well as controller performance evaluation.
A survey of commercially available packages has been provided by S.J. Qin and T.A. Badgwell in Control Engineering Practice 11 (2003) 733–764.
MPC vs. LQR
The main differences between MPC and LQR are that LQR optimizes over a fixed time window (horizon) whereas MPC optimizes over a receding time window,[4] and that with MPC a new solution is computed often whereas LQR uses the single (optimal) solution for the whole time horizon. Therefore, MPC typically solves the optimization problem over smaller time windows than the whole horizon and hence may obtain a suboptimal solution. However, because MPC makes no assumptions about linearity, it can handle hard constraints as well as the migration of a nonlinear system away from its linearized operating point, both of which are major drawbacks of LQR.
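The contrast can be made concrete with a small sketch (the system matrices, weights, and horizon are illustrative assumptions, and constraints are omitted for brevity): LQR computes a single feedback gain offline from the Riccati equation and applies it at every step, whereas MPC re-solves a finite-horizon problem online at each step, which is where hard constraints could be imposed.

```python
# Minimal sketch contrasting LQR (one fixed gain) with receding-horizon MPC
# (re-optimized every step). All matrices and weights are illustrative.
import numpy as np
from scipy.linalg import solve_discrete_are
from scipy.optimize import minimize

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q, R = np.eye(2), np.array([[0.1]])

# LQR: infinite-horizon gain from the discrete algebraic Riccati equation.
P = solve_discrete_are(A, B, Q, R)
K_lqr = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
u_lqr = lambda x: (-K_lqr @ x)[0]        # the same law at every time step

# MPC: finite receding horizon, re-solved online; input or state constraints
# could be added to this optimization, which plain LQR cannot handle.
def u_mpc(x, N=10):
    def cost(U):
        xk, J = x.copy(), 0.0
        for u in U:
            J += xk @ Q @ xk + R[0, 0] * u**2
            xk = A @ xk + B.flatten() * u
        return J + xk @ P @ xk           # terminal cost from the Riccati solution
    return minimize(cost, np.zeros(N)).x[0]

x0 = np.array([1.0, 0.0])
u_lqr(x0), u_mpc(x0)                     # compare the two first moves
```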
References
- Michèle Arnold, Göran Andersson. "Model Predictive Control of energy storage including uncertain forecasts" https://www.pscc-central.org/uploads/tx_ethpublications/fp292.pdf
- Tobias Geyer: Model predictive control of high power converters and industrial drives, Wiley, London, ISBN 978-1-119-01090-6, Nov. 2016.
- Vichik, Sergey; Borrelli, Francesco (2014). "Solving linear and quadratic programs with an analog circuit". Computers & Chemical Engineering. 70: 160–171. doi:10.1016/j.compchemeng.2014.01.011.
- Wang, Liuping (2009). Model Predictive Control System Design and Implementation Using MATLAB®. Springer Science & Business Media. pp. xii.
- Al-Gherwi, Walid; Budman, Hector; Elkamel, Ali (3 July 2012). "A robust distributed model predictive control based on a dual-mode approach". Computers and Chemical Engineering. 50 (2013): 130–138. doi:10.1016/j.compchemeng.2012.11.002.
- Michael Nikolaou, Model predictive controllers: A critical synthesis of theory and industrial needs, Advances in Chemical Engineering, Academic Press, 2001, Volume 26, Pages 131-204
- An excellent overview of the state of the art (in 2008) is given in the proceedings of the two large international workshops on NMPC, by Zheng and Allgower (2000) and by Findeisen, Allgöwer, and Biegler (2006).
- J.D. Hedengren; R. Asgharzadeh Shishavan; K.M. Powell; T.F. Edgar (2014). "Nonlinear modeling, estimation and predictive control in APMonitor". Computers & Chemical Engineering. 70 (5): 133–148. doi:10.1016/j.compchemeng.2014.04.013.
- Ohtsuka, Toshiyuki (2004). "A continuation/GMRES method for fast computation of nonlinear receding horizon control". Automatica. 40 (4): 563–574. doi:10.1016/j.automatica.2003.11.005.
- Knyazev, Andrew; Malyshev, Alexander (2016). "Sparse preconditioning for model predictive control". 2016 American Control Conference (ACC). pp. 4494–4499. arXiv:1512.00375. doi:10.1109/ACC.2016.7526060. ISBN 978-1-4673-8682-1. S2CID 2077492.
- M.R. García; C. Vilas; L.O. Santos; A.A. Alonso (2012). "A Robust Multi-Model Predictive Controller for Distributed Parameter Systems" (PDF). Journal of Process Control. 22 (1): 60–71. doi:10.1016/j.jprocont.2011.10.008.
- R. Kamyar; E. Taheri (2014). "Aircraft Optimal Terrain/Threat-Based Trajectory Planning and Control". Journal of Guidance, Control, and Dynamics. 37 (2): 466–483. Bibcode:2014JGCD...37..466K. doi:10.2514/1.61339.
- Bemporad, Alberto; Morari, Manfred; Dua, Vivek; Pistikopoulos, Efstratios N. (2002). "The explicit linear quadratic regulator for constrained systems". Automatica. 38 (1): 3–20. doi:10.1016/s0005-1098(01)00174-1.
- Knyazev, Andrew; Zhu, Peizhen; Di Cairano, Stefano (2015). "Explicit model predictive control accuracy analysis". 2015 54th IEEE Conference on Decision and Control (CDC). pp. 2389–2394. arXiv:1509.02840. Bibcode:2015arXiv150902840K. doi:10.1109/CDC.2015.7402565. ISBN 978-1-4799-7886-1. S2CID 6850073.
- Klaučo, Martin; Kalúz, Martin; Kvasnica, Michal (2017). "Real-time implementation of an explicit MPC-based reference governor for control of a magnetic levitation system". Control Engineering Practice. 60: 99–105. doi:10.1016/j.conengprac.2017.01.001.
- Scokaert, P.O.M.; Mayne, D.Q. (1998). "Min-max feedback model predictive control for constrained linear systems". IEEE Transactions on Automatic Control. 43 (8): 1136–1142. doi:10.1109/9.704989.
- Richards, A.; How, J. (2006). "Robust stable model predictive control with constraint tightening". Proceedings of the American Control Conference.
- Langson, W.; I. Chryssochoos; S.V. Rakovic; D.Q. Mayne (2004). "Robust model predictive control using tubes". Automatica. 40 (1): 125–133. doi:10.1016/j.automatica.2003.08.009.
- Lucia, Sergio; Finkler, Tiago; Engell, Sebastian (2013). "Multi-stage nonlinear model predictive control applied to a semi-batch polymerization reactor under uncertainty". Journal of Process Control. 23 (9): 1306–1319. doi:10.1016/j.jprocont.2013.08.008.
Further reading
- Kwon, W. H.; Bruckstein, Kailath (1983). "Stabilizing state feedback design via the moving horizon method". International Journal of Control. 37 (3): 631–643. doi:10.1080/00207178308932998.
- Garcia, C; Prett, Morari (1989). "Model predictive control: theory and practice". Automatica. 25 (3): 335–348. doi:10.1016/0005-1098(89)90002-2.
- Findeisen, Rolf; Allgower, Frank (2001). "An introduction to nonlinear model predictive control". Summerschool on "The Impact of Optimization in Control", Dutch Institute of Systems and Control. C.W. Scherer and J.M. Schumacher, Editors.: 3.1–3.45.
- Mayne, D.Q.; Michalska, H. (1990). "Receding horizon control of nonlinear systems". IEEE Transactions on Automatic Control. 35 (7): 814–824. doi:10.1109/9.57020.
- Mayne, D; Rawlings; Rao; Scokaert (2000). "Constrained model predictive control: stability and optimality". Automatica. 36 (6): 789–814. doi:10.1016/S0005-1098(99)00214-9.
- Allgöwer; Zheng (2000). Nonlinear model predictive control. Progress in Systems Theory. 26. Birkhauser.
- Camacho; Bordons (2004). Model predictive control. Springer Verlag.
- Findeisen; Allgöwer, Biegler (2006). Assessment and Future Directions of Nonlinear Model Predictive Control. Lecture Notes in Control and Information Sciences. 26. Springer.
- Diehl, M; Bock; Schlöder; Findeisen; Nagy; Allgöwer (2002). "Real-time optimization and Nonlinear Model Predictive Control of Processes governed by differential-algebraic equations". Journal of Process Control. 12 (4): 577–585. doi:10.1016/S0959-1524(01)00023-3.
- James B. Rawlings, David Q. Mayne and Moritz M. Diehl: ”Model Predictive Control: Theory, Computation, and Design”(2nd Ed.), Nob Hill Publishing, LLC, ISBN 978-0975937730 (Oct. 2017).
- Tobias Geyer: Model predictive control of high power converters and industrial drives, Wiley, London, ISBN 978-1-119-01090-6, Nov. 2016
External links
- Case Study. Lancaster Waste Water Treatment Works, optimisation by means of Model Predictive Control from Perceptive Engineering
- ACADO Toolkit - Open Source Toolkit for Automatic Control and Dynamic Optimization providing linear and non-linear MPC tools. (C++, MATLAB interface available)
- μAO-MPC - Open Source Software package that generates tailored code for model predictive controllers on embedded systems in highly portable C code.
- jMPC Toolbox - Open Source MATLAB Toolbox for Linear MPC.
- Study on application of NMPC to superfluid cryogenics (PhD Project).
- Nonlinear Model Predictive Control Toolbox for MATLAB and Python
- Model Predictive Control Toolbox from MathWorks for design and simulation of model predictive controllers in MATLAB and Simulink
- Pulse step model predictive controller - virtual simulator
- Tutorial on MPC with Excel and MATLAB Examples
- GEKKO: Model Predictive Control in Python