
ISBN: 978-1-56700-537-0

ISBN Online: 978-1-56700-538-7

ISSN Online: 2377-424X

International Heat Transfer Conference 17
August 14-18, 2023, Cape Town, South Africa

A NUMERICAL INVESTIGATION OF HEAT TRANSFER AND FLOW REDISTRIBUTION IN A 7×7 BALLOONED ROD BUNDLE

DOI: 10.1615/IHTC17.330-100
10 pages

Abstract

Overheating of the fuel rods is common to many postulated nuclear reactor accidents. It may lead to "swelling" or "ballooning" of the cladding, which can partially block the fuel channels. To perform detailed safety analyses for these postulated accidents, it is crucial to understand the redistribution of the coolant flow and the associated changes in heat transfer near the blockage. In this work, both Unsteady Reynolds-Averaged Navier-Stokes (URANS) and Large Eddy Simulation (LES) computations of a 7×7 square-lattice rod bundle with postulated ballooning of the nine central rods are carried out, with the aim of producing high-fidelity Computational Fluid Dynamics data as practical benchmarks for the community to support engineering model development and application.

The simulations are first performed for an isothermal flow, for which experimental data on the mean and Root-Mean-Square (RMS) fluctuating velocity profiles are available both upstream and downstream of the ballooning region. The low-Reynolds-number k-ω SST turbulence model is used in the URANS simulation and the WALE model is used for the LES. Comparison against the experimental data shows that LES predicts the mean and RMS velocities more accurately than URANS, although the latter also captures some of the three-dimensional asymmetric flow features in the wake downstream of the ballooning region. Secondly, a non-isothermal case at a typical pressurised water reactor operating pressure and temperature is modelled. To account for the increased Reynolds number, the LES mesh for the non-isothermal case is refined to around one billion hexahedral cells. To meet the high computational demand, the simulations are run on 512 nodes of the UK Tier-1 high-performance computing system ARCHER2, so that high-fidelity data can be produced within a reasonably short time.
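For readers unfamiliar with the WALE subgrid-scale model mentioned above, the standard formulation (due to Nicoud and Ducros) is sketched below for reference; the expression and the typical constant value C_w ≈ 0.5 are standard in the literature and are not reproduced from the paper, which may use different settings.

\nu_t = (C_w \Delta)^2 \, \frac{(S_{ij}^{d} S_{ij}^{d})^{3/2}}{(\bar{S}_{ij}\bar{S}_{ij})^{5/2} + (S_{ij}^{d} S_{ij}^{d})^{5/4}}, \qquad
S_{ij}^{d} = \tfrac{1}{2}\left(\bar{g}_{ij}^{2} + \bar{g}_{ji}^{2}\right) - \tfrac{1}{3}\delta_{ij}\,\bar{g}_{kk}^{2}, \qquad
\bar{g}_{ij} = \frac{\partial \bar{u}_i}{\partial x_j}, \quad \bar{g}_{ij}^{2} = \bar{g}_{ik}\bar{g}_{kj},

where \Delta is the filter width (typically the cube root of the local cell volume) and \bar{S}_{ij} is the resolved strain-rate tensor.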