Model-free control of direct air capture: Optimal rule-based policies tuned via Bayesian optimization

article
Direct air capture (DAC) is a promising carbon dioxide removal technology, offering safe, flexible, and scalable negative emissions. However, the capture costs and productivity of the DAC process are strongly affected by fluctuations in weather conditions, such as ambient temperature and humidity, and this sensitivity hinders large-scale adoption. To commercialize DAC, it is therefore vital to design a control method that reduces capture costs and enhances productivity while accounting for local climate conditions. Because the coupled thermodynamic, heat-, and mass-transfer phenomena at the core of DAC are complex, a sufficiently efficient model is unavailable, parameter estimation is intractable, and model-based approaches carry high computational costs, we propose a model-free, data-driven rule-based control approach to improve DAC performance and reduce its costs. The proposed rule-based control strategy functions as an online policy that continuously receives feedback from ambient temperature and humidity, enabling dynamic adjustments. Maximizing overall system performance then amounts to finding the optimal control law within the considered class of rule-based control schemes. To this end, we utilize a Bayesian optimization framework to optimally tune the parameters of the rule-based control strategy, leveraging data from online operation under the prevailing climate conditions. We demonstrate that the proposed method increases annual DAC productivity by up to 16.7 % and lowers annual capture costs by up to 10.3 % relative to the baseline method.
We also observe that the proposed method achieves an annual productivity improvement of 13.59 % and an annual cost reduction of 9.30 % relative to the baseline in Amsterdam, outperforming a data-enabled predictive control method, which achieves 7.07 % productivity improvement and 8.25 % cost reduction, and a reinforcement learning-based controller, which achieves 3.63 % productivity improvement and 3.12 % cost reduction. © 2026 Elsevier Ltd
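The abstract describes an online rule-based policy whose parameters are tuned from operational data. As a minimal illustration of that idea, the sketch below uses a hypothetical threshold rule (switch from adsorption to desorption when sorbent loading crosses a weather-shifted threshold) and tunes its three parameters with random search standing in for the paper's Bayesian optimizer; all function names, parameters, and the toy objective are illustrative assumptions, not the authors' model.

```python
import random

# Hypothetical rule-based DAC policy: switch from adsorption to desorption
# when the estimated sorbent loading exceeds a threshold that is shifted by
# ambient temperature and relative humidity. Parameter names are illustrative.
def policy_action(temp_c, rel_humidity, loading, params):
    base, k_t, k_h = params
    threshold = base + k_t * (temp_c - 20.0) + k_h * (rel_humidity - 0.5)
    return "desorb" if loading >= threshold else "adsorb"

# Toy objective: a crude proxy for (captured CO2 - regeneration cost) over a
# sequence of weather samples; in the paper this signal would come from
# actual plant operation influenced by local climate.
def evaluate(params, weather):
    loading, reward = 0.0, 0.0
    for temp_c, rh in weather:
        if policy_action(temp_c, rh, loading, params) == "adsorb":
            # adsorption rate degrades mildly with warm air (toy assumption)
            loading += 0.05 * (1.0 - 0.01 * max(temp_c - 20.0, 0.0))
        else:
            reward += loading   # captured CO2 credited at desorption
            reward -= 0.2       # fixed regeneration (energy) cost per cycle
            loading = 0.0
    return reward

# Derivative-free tuning loop: random search as a stand-in for Bayesian
# optimization; a BO library would replace the uniform sampling step with
# a surrogate-guided proposal of the next candidate parameters.
def tune(weather, n_iter=200, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_iter):
        params = (
            rng.uniform(0.1, 1.0),     # base switching threshold
            rng.uniform(-0.02, 0.02),  # temperature sensitivity
            rng.uniform(-0.5, 0.5),    # humidity sensitivity
        )
        score = evaluate(params, weather)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

The structure mirrors the paper's setup at a high level only: a fixed rule class parameterized by a few scalars, an objective observed through operation under varying weather, and a black-box optimizer closing the outer loop.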
TNO Identifier
1029048
DOI
https://dx.doi.org/10.1016/j.conengprac.2026.106980
ISSN
0967-0661
Source
Control Engineering Practice, 173, pp. 1-14.
Publisher
Elsevier Ltd
Pages
1-14
Files
To receive the publication files, please send an e-mail request to TNO Repository.