
  1. Graduate Student, Department of Architectural and Urban Systems Engineering, Ewha Womans University, Seoul 03760, Rep. of Korea
  2. Professor, Department of Architectural and Urban Systems Engineering, Ewha Womans University, Seoul 03760, Rep. of Korea



Keywords: concrete, high temperature, supplementary cementitious material (SCM), compressive strength, machine learning

1. Introduction

Concrete has been predominantly used in construction due to its cost-effectiveness and durability. Although concrete is also known to be fire-resistant due to its low thermal conductivity, its strength is sensitive to temperature changes (Ma et al. 2015). Moreover, the strength response varies with the supplementary cementitious material (SCM) content, such as fly ash and slag, which are increasingly being used as partial replacements for cement to reduce environmental impact (Ramzi and Hajiloo 2023). Therefore, accurately predicting strength changes in concrete with varying SCM content when exposed to high temperatures is crucial for assessing its fire resistance performance.

Recently, new approaches such as machine learning (ML) have been introduced to elucidate and predict concrete behaviors. Various studies have attempted to use ML models to predict concrete compressive strength, capture its sensitivity to influencing factors, and overcome the limitations of traditional statistical analysis. Farooq et al. (2020) adopted gene expression programming (GEP) and random forest (RF) algorithms to predict the compressive strength of high-strength concrete. Ahmad et al. (2021a) predicted the compressive strength of fly ash-based concrete using decision trees (DT), an ensemble bagging approach, GEP, and K-fold cross-validation. In particular, several studies (Li and Song 2022; Rathakrishnan et al. 2022; Sapkota et al. 2024; Vo et al. 2024) have used ensemble models, which combine several weak learners, to better predict the compressive strength of concrete. Vo et al. (2024) utilized adaptive boosting (AdaBoost), gradient boosting regression trees (GBRT), extreme gradient boosting (XGBoost), and categorical gradient boosting (CatBoost), and highlighted XGBoost's superior performance. Rathakrishnan et al. (2022) predicted and compared the compressive strength of concrete with a high-volume ground granulated blast-furnace slag replacement using light gradient boosting machine (LGBM), CatBoost, gradient boosting regressor (GBR), AdaBoost, and XGBoost methods. Later, Sapkota et al. (2024) showed that CatBoost was more suitable than RF for predicting the compressive strength of normal concrete.

Compared to ML studies predicting the compressive strength of concrete under normal conditions, relatively few studies have addressed concrete exposed to high temperatures. Ahmad et al. (2021b) compared the predictive accuracy of individual ML models with that of ensemble ML models in estimating concrete compressive strength at high temperatures. They employed DT and artificial neural networks (ANNs) as the individual models and bagging regressors and GBR as the ensemble models. Their findings revealed that the ensemble models had better predictive accuracy than the individual models. Similarly, Ahmad et al. (2021c) examined the changes in the compressive strength of concrete heated to high temperatures using three ML methods: DT (individual model), and AdaBoost and RF (ensemble models). They conducted comparative and sensitivity analyses, which revealed that AdaBoost had the most effective predictive performance among the three (Ahmad et al. 2021c).

Despite the importance and necessity of research evaluating the fire resistance performance of concrete, there is a lack of studies applying ML to assess the fire resistance of structures. Previous studies considered a limited set of features and thus did not cover the diverse range of parameters influencing the strength of fire-damaged concrete. Moreover, comparative studies of ensemble models have not been sufficiently conducted, which makes it hard to adopt the most recent ML techniques in the fire safety engineering field.

Hence, this study endeavored to propose the ML model most suitable for predicting the compressive strength of concrete heated to high temperatures among various ensemble models. The model was developed using a diverse dataset encompassing different parameters such as normal or high strength, mix ratios of admixtures, heating temperatures, and cooling period after heating. In addition to investigating predictive accuracy and conducting sensitivity analyses, this study validated the proposed model by applying a new dataset to the model and comparing the predicted and experimental values. Ultimately, this research can facilitate the use of ML models to predict the fire resistance of concrete structures by estimating changes in the compressive strength of concrete exposed to high temperatures.

2. Machine Learning Algorithms

Among many ML models, one notable type is the ensemble model, which outperforms individual models by combining several weak learners (Feng et al. 2020; Ahmad et al. 2021a). An ensemble model increases the probability of obtaining a learner with better generalization performance by combining multiple individual models. There are two main approaches to building an ensemble: boosting and bagging. The boosting method trains models sequentially and reduces errors by assigning larger weights to misclassified instances; the sample weights of the next classifier's training data are adjusted based on the previous classifier's learning results. Representative boosting algorithms include the gradient boosting regressor (GBR), extreme gradient boosting regressor (XGBR), and categorical gradient boosting (CatBoost). Bagging, short for bootstrap aggregating, is a parallel-type ensemble method that creates each subset by randomly sampling the data with replacement (bootstrap) and combines the predictions from these individual models (aggregating). Random forest (RF) and extra trees (ET) are examples of bagging models. In this study, five commonly employed ensemble models are chosen: GBR, XGBR, CatBoost, RF, and ET, which are explained in the following sections.

2.1 Boosting

2.1.1 Gradient boosting regressor (GBR)

GBR is based on a boosting algorithm; it compensates for errors by assigning weights to the errors of the previous tree and uses gradient descent to update the weights. Although its sequential learning process can lead to slower processing, GBR can effectively handle various optimized objective functions.

2.1.2 Extreme gradient boosting regressor (XGBR)

XGBR was developed to overcome the speed limitations of boosting algorithms and achieves faster processing by supporting parallel computation. It contains regularization mechanisms that control overfitting and shows outstanding predictive capabilities in both classification and regression.

2.1.3 Categorical gradient boosting (CatBoost)

CatBoost manages categorical variables effectively through ordered boosting and efficient encoding techniques such as target, mean, and response encoding. The model has recently been widely used because of its relatively low computation time without compromising accuracy, along with well-optimized default hyperparameters.

2.2 Bagging

2.2.1 Random forest (RF)

RF comprises multiple decision trees and can resolve the overfitting issues that occur in a single decision tree. Each RF tree generates varied predictions due to randomness, enabling the model to learn from diverse perspectives and improve its predictions, which ultimately enhances its generalization performance. Additionally, by applying randomization within the bagging ensemble method, the forest becomes robust to noisy data.

2.2.2 Extra trees (ET)

ET, a variation of the RF model, randomly selects split thresholds for the features at each node, whereas RF chooses the optimal split among the candidate features. In other words, ET incorporates greater randomness than RF and executes faster. The model also reduces bias by utilizing the entire original dataset rather than relying on bootstrap sampling, which is a key characteristic of RF.
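The five models discussed above can be instantiated directly from their Python libraries. The following is a minimal sketch, assuming the scikit-learn, xgboost, and catboost packages are available; the settings shown are library defaults with a fixed seed, not the tuned values used in this study.

```python
from sklearn.ensemble import (GradientBoostingRegressor,
                              RandomForestRegressor, ExtraTreesRegressor)
from xgboost import XGBRegressor
from catboost import CatBoostRegressor

models = {
    # Boosting-type ensemble models (sequential weak learners)
    "GBR": GradientBoostingRegressor(random_state=42),
    "XGBR": XGBRegressor(random_state=42),
    "CatBoost": CatBoostRegressor(random_state=42, verbose=0),
    # Bagging-type ensemble models (parallel weak learners)
    "RF": RandomForestRegressor(random_state=42),
    "ET": ExtraTreesRegressor(random_state=42),
}
```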

3. Database Description

3.1 Data collection

To predict the fire resistance performance of concrete using ML algorithms, data were collected experimentally as well as from the available literature. For the experiments, test variables were designed with different water-to-binder ratios and cement-to-admixture ratios, as listed in Table 1.

Cylindrical specimens with a diameter of 100 mm and a height of 200 mm were manufactured and cured underwater for 28 days at room temperature. The cured specimens were then preheated at a constant temperature of 100 °C inside a muffle furnace (Nabertherm, Germany) to evaporate the moisture inside the specimens and thereby prevent spalling. Thereafter, the specimens were heated at 200 °C, 500 °C, or 800 °C for 3 hours so that the target temperature was completely transferred through the specimens for a sufficient duration. The heating rate was kept below 4.4 °C/min to prevent spalling. After heating, the specimens remained in the heating chamber for 24 hours until they had slowly cooled down. Compressive strength and strains were then measured using a loading machine and a compressometer, respectively. Compressive strength tests were conducted two or three times for each test variable until consistent test results were obtained (Chun et al. 2023). A previous study (Chun et al. 2023) also reported strength changes of the concrete from 1 day to 90 days after heating, and the heated specimens showed minimal strength decrease up to 30 days after heating. Therefore, the change of strength within 24 hours after heating could be neglected, considering the small strength change within 30 days after heating and the consistent test conditions in which all specimens remained in the heating chamber for 24 hours after heating. Figs. 1(a)~(c) show photographs of specimen manufacturing as well as the heating and loading tests. Fig. 2 indicates that as the temperature rises from 200 °C to 800 °C, the compressive strength of the concrete decreases significantly. To investigate the effect of mix ratios on the amount of heat-induced strength reduction, residual strength ratios were calculated as the strengths of heated specimens divided by those of unheated specimens and are illustrated in Figs. 3(a)~(c).

Fig. 3(a) illustrates the residual strength ratios of pure cement, cement-fly ash, and cement-fly ash-slag composite concretes at 200 °C, 500 °C, and 800 °C. It is interesting to note that the concrete containing equal amounts of fly ash and slag showed a better residual strength ratio than the pure cement and the cement-fly ash concretes when exposed to 500 °C and 800 °C. In contrast, Fig. 3(b) shows that, among the fly ash-slag composite concretes, those with unequal proportions of fly ash and slag showed larger strength reductions, even greater than those of the pure cement and the cement-fly ash concretes. Therefore, the experimental results showed that the strength reduction ratios varied depending on the mix ratios. However, the interaction between fly ash and slag is not clear, and more data need to be analyzed to generalize the effect of admixture mix ratios on heat-induced strength reduction. When comparing the residual strength ratios of the normal (NF3S1) and low-cement (LF3S1) concretes, higher residual strength ratios were obtained from the low-cement concrete with the fly ash-slag composite (LF3S1) at all tested temperatures, as shown in Fig. 3(c).

Even though mix proportions such as the water-to-binder ratio or cement-to-admixture ratio significantly affect the strength of heated concrete, the strength is very hard to predict because of the multiple, complex influencing parameters. Given the difficulty in statistically analyzing the trends in strength changes according to heat exposure and concrete mix proportions, it is helpful to utilize ML for predicting the compressive strength of fire-damaged concrete. The ML algorithms require input data derived from the experimental results. The data were categorized into cement (kg), fly ash (cement-to-fly ash ratio, %), slag (cement-to-slag ratio, %), W/B (water-to-binder ratio, %), temp (heating temperature, °C), strength (compressive strength, MPa), and time (cooling period after heating, d), in order to consider the influencing factors and the expected output (strength). Because only a limited number of experimental results was obtained from our research group, additional data on concrete strength after exposure to high temperatures for different mix proportions were collected from the published literature (Yeh 1998; Lee et al. 2002; Kim et al. 2012; Lee et al. 2012; Ahmad et al. 2021b; Song et al. 2021) and added to the ML dataset. In total, 738 data records collected from the experiments and the literature were used to train and test the ML algorithms.
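As a minimal sketch of how such a dataset could be organized for the ML models, the snippet below assembles the seven columns named above into a pandas DataFrame; the file name heated_concrete.csv and the column labels are illustrative assumptions, not the actual files used in this study.

```python
import pandas as pd

# Hypothetical file holding the 738 records compiled from experiments and literature
df = pd.read_csv("heated_concrete.csv")

# Input features and target, following the categorization described above
feature_cols = ["cement", "fly_ash", "slag", "w_b", "temp", "time"]
X = df[feature_cols]     # cement (kg), admixture ratios (%), W/B (%), temp (°C), time (d)
y = df["strength"]       # compressive strength (MPa)

print(X.shape, y.shape)  # expected: (738, 6) and (738,)
```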

Fig. 1 Photographs of manufacturing specimens, heating, and loading test


Fig. 2 Compressive strength of the heated specimens


Fig. 3 Comparison of residual strength ratio of the heated specimens


Table 1 Mix proportions of the tested specimens

| Mix ID | Cement (kg/m3) | Water (kg/m3) | Water-to-binder ratio (%) | Cement-to-admixture ratio (%) | Fly ash (kg/m3) | Slag (kg/m3) | Fine aggregate (kg/m3) | Coarse aggregate (kg/m3) | PP fiber (vol%) | Superplasticizer (kg/m3) |
|---|---|---|---|---|---|---|---|---|---|---|
| NF0S0 | 604 | 157 | 26 | 0 | 0 | 0 | 681 | 921 | 0.15 | 6.04 |
| NF4S0 | 484 | 157 | 26 | 20 | 120 | 0 | 662 | 895 | | |
| NF2S2 | 484 | 157 | 26 | 20 | 60 | 60 | 666 | 909 | | |
| NF3S1 | 484 | 157 | 26 | 20 | 90 | 30 | 663 | 904 | | |
| NF1S3 | 484 | 157 | 26 | 20 | 30 | 90 | 669 | 913 | | |
| LF4S0 | 300 | 140 | 31.2 | 40 | 100 | 100 | 606 | 1,055 | | |
| LF2S2 | 300 | 140 | 31.2 | 40 | 200 | 0 | 617 | 1,073 | | |
| LF3S1 | 300 | 140 | 31.2 | 40 | 150 | 50 | 611 | 1,063 | | |

3.2 Data preprocessing

Machine learning modeling was performed using Scikit-Learn, a Python-based library widely used for machine learning analysis. The dataset was split into training and testing sets with proportions of 70 % and 30 %, respectively. Additionally, through standardization in the data scaling process, the mean of each variable was set to 0 and the variance to 1, reducing the impact of scale differences between variables.
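A minimal sketch of this preprocessing step with Scikit-Learn is shown below, assuming the feature matrix X and target y assembled earlier; the 70/30 split and the standardization mirror the description above, while the random seed is an illustrative assumption.

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# 70 % of the data for training, 30 % held out for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Standardize each feature to zero mean and unit variance,
# fitting the scaler on the training set only to avoid leakage
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
```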

4. Model Selection in Machine Learning

4.1 Model evaluation indices

The coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE) are used as indicators in this study to evaluate the prediction performance of the models and choose the best model. R2 is a variance-based indicator ranging from 0 to 1, and a value closer to 1 indicates higher prediction accuracy. MAE is the average of the absolute differences between actual and predicted values, whereas RMSE is the square root of the mean squared differences between actual and predicted values. For both the MAE and RMSE indices, lower values indicate higher accuracy. The formulations of the evaluation indices are presented in Eqs. (1)~(3), where N is the number of samples, y is the actual value, $\overline{y}$ is the mean of the actual values, and $\hat{y}$ is the predicted value (Li and Song 2022).

(1)
$R^{2} = 1-\dfrac{\sum_{i=1}^{N}(y_{i}-\hat{y}_{i})^{2}}{\sum_{i=1}^{N}(y_{i}-\overline{y})^{2}}$
(2)
$MAE = \dfrac{1}{N}\sum_{i=1}^{N}\left|y_{i}-\hat{y}_{i}\right|$
(3)
$RMSE = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N}(y_{i}-\hat{y}_{i})^{2}}$
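As a minimal sketch, these three indices can be computed with Scikit-Learn and NumPy as follows, assuming y_test holds the actual strengths and y_pred the model predictions.

```python
import numpy as np
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error

def evaluate(y_test, y_pred):
    """Return the R2, MAE, and RMSE defined in Eqs. (1)~(3)."""
    r2 = r2_score(y_test, y_pred)                       # Eq. (1)
    mae = mean_absolute_error(y_test, y_pred)           # Eq. (2)
    rmse = np.sqrt(mean_squared_error(y_test, y_pred))  # Eq. (3)
    return r2, mae, rmse
```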

4.2 Model evaluation and selection

After preprocessing the input data and performing default parameter tuning through PyCaret, the R2, RMSE, and MAE evaluation indices were collected from various ML models. Among the many models, the top five ML models with the highest prediction accuracy were GBR, XGBR, CatBoost, RF, and ET, as listed in Table 2.
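The screening described here could be reproduced with PyCaret's regression module roughly as sketched below; the setup arguments shown (target column name, 70/30 split, standardization, seed) are assumptions consistent with the preprocessing described earlier, not the exact configuration used in this study.

```python
from pycaret.regression import setup, compare_models

# Initialize the PyCaret experiment on the compiled DataFrame `df`
exp = setup(data=df, target="strength",
            train_size=0.7,      # 70 % training / 30 % testing
            normalize=True,      # zero-mean, unit-variance scaling
            session_id=42)

# Rank candidate regressors by cross-validated metrics and keep the top five
top5 = compare_models(n_select=5, sort="R2")
```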

All evaluation indicators (R2, RMSE, and MAE) revealed that the CatBoost model performed best. For example, the R2 value of the CatBoost model was 0.8356, whereas those of the other four models were 0.8196 (RF), 0.8146 (ET), 0.8097 (XGBR), and 0.8086 (GBR). This result also agrees with the findings of Sapkota et al. (2024), in which CatBoost was found to be more suitable than RF for predicting the compressive strength of normal concrete.

To improve prediction accuracy, hyperparameter tuning was conducted by selecting and applying optimized parameters to the five ML models; these parameters therefore differ from those determined automatically through PyCaret. In all models except RF, the R2, RMSE, and MAE values improved through hyperparameter tuning, and the CatBoost model demonstrated better accuracy than the other models regardless of the evaluation index. Table 3 indicates that the R2 and RMSE values for the test dataset were best in the order of CatBoost, XGBR, GBR, ET, and RF, whereas the MAE value was best in the order of CatBoost (4.7733), ET (4.9702), GBR (5.0092), XGBR (5.2126), and RF (5.4410). It is interesting to note that CatBoost had the best performance in all cases and that the difference between the bagging and boosting methods was marginal. Among the bagging models, ET performed better than RF.
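A hedged sketch of such tuning for the CatBoost model is given below using Scikit-Learn's grid search; the parameter grid is purely illustrative, as the tuned hyperparameter values are not listed here.

```python
from sklearn.model_selection import GridSearchCV
from catboost import CatBoostRegressor

# Illustrative search space only; the actual tuned values are not reported here
param_grid = {
    "depth": [4, 6, 8],
    "learning_rate": [0.03, 0.1],
    "iterations": [500, 1000],
}

search = GridSearchCV(CatBoostRegressor(verbose=0, random_state=42),
                      param_grid, scoring="r2", cv=5)
search.fit(X_train_scaled, y_train)

best_cat = search.best_estimator_
print(search.best_params_, search.best_score_)
```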

Additionally, a comparative visualization of the actual experimental values and the predictions obtained from the five models is presented in Figs. 4(a)~(e). As shown in Figs. 4(a) and (b), for the CatBoost and ET models, the marks denoting the experimental results (in blue and red) are hardly visible because they overlap with the prediction marks (in green and yellow). However, for the other models (Figs. 4(c)~(e)), the marks denoting the experimental data (in blue and red) and the predictions (in green and yellow) are clearly visible, as they are relatively far apart.
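One simple way to compare actual and predicted strengths graphically is a parity-style scatter plot, sketched below with matplotlib; this is an illustrative layout rather than a reproduction of Fig. 4, and it assumes the fitted CatBoost model and the scaled splits from the earlier sketches.

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Actual vs. predicted strengths for the training and testing sets
ax.scatter(y_train, best_cat.predict(X_train_scaled), label="Train", alpha=0.6)
ax.scatter(y_test, best_cat.predict(X_test_scaled), label="Test", alpha=0.6)
ax.plot([0, y.max()], [0, y.max()], "k--", label="1:1 line")  # perfect-prediction line
ax.set_xlabel("Experimental strength (MPa)")
ax.set_ylabel("Predicted strength (MPa)")
ax.legend()
plt.show()
```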

Fig. 4 Comparisons between the actual dataset and predicted results of the five models in the training and testing phases


Table 2 Evaluation results of different ML models

| Model | Type | R2 | RMSE | MAE |
|---|---|---|---|---|
| CatBoost | Boosting | 0.8356 | 7.7724 | 5.2003 |
| RF | Bagging | 0.8196 | 8.1560 | 5.3934 |
| ET | Bagging | 0.8146 | 8.2800 | 5.2355 |
| XGBR | Boosting | 0.8097 | 8.4026 | 5.4116 |
| GBR | Boosting | 0.8086 | 8.3997 | 5.9740 |

Table 3 Evaluation results of hyperparameter tuning

| Model | R2 (Train) | R2 (Test) | RMSE (Train) | RMSE (Test) | MAE (Train) | MAE (Test) |
|---|---|---|---|---|---|---|
| CatBoost | 0.9818 | 0.8411 | 2.5911 | 7.6400 | 0.6685 | 4.7733 |
| XGBR | 0.9644 | 0.8337 | 3.6267 | 7.8148 | 2.1447 | 5.2126 |
| GBR | 0.9755 | 0.8220 | 3.0065 | 8.0855 | 1.6812 | 5.0092 |
| ET | 0.9818 | 0.8182 | 2.5911 | 8.1714 | 0.6685 | 4.9702 |
| RF | 0.9623 | 0.8020 | 3.7330 | 8.5271 | 2.2212 | 5.4410 |

5. Model Performance Analysis

5.1 Feature importance

A feature importance analysis was conducted for the five ML models, and the results are illustrated in Fig. 5. All models indicated similar patterns; specifically, the water-to-binder ratio and heating temperature were identified as the most influential factors affecting the strength of heated concrete, whereas the cooling period had the least influence on strength. Of the two admixtures, fly ash exhibited a slightly stronger influence than slag. Interestingly, the CatBoost model identified heating temperature as slightly more important than the water-to-binder ratio, although the difference was small, and the ranking of the remaining features was similar to that in the other models.
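As a minimal sketch, feature importances can be read directly from the fitted models; the snippet below assumes the tuned CatBoost model and the feature names from the earlier sketches.

```python
import pandas as pd

# Relative importance of each input feature for the fitted CatBoost model
importance = pd.Series(best_cat.feature_importances_, index=feature_cols)
print(importance.sort_values(ascending=False))
```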

In previous sensitivity analyses (Ahmad et al. 2021b; Ahmad et al. 2021c), cement was the most influential factor, followed by fly ash, whereas the effect of heating temperature appeared less substantial than that of other factors. Unlike the aforementioned research, this study recognized the contribution of temperature to the strength of heated concrete, which may explain why the CatBoost model had higher prediction accuracy than the other models.

Fig. 5 Feature importance for the five ML models


5.2 Validation of CatBoost and ET models

In this section, a new dataset that had not been used in training or testing was applied to two ML models: CatBoost, which had shown the highest overall accuracy, and ET, which was the best among the bagging models. This was done not only to validate the prediction accuracy of the models, but also to demonstrate their applicability to a wide range of concrete mixtures. The new dataset was adopted from the work of Khan et al. (2013), who manufactured cube-shaped concrete specimens with four different mix ratios by varying the cement-to-fly ash and water-to-binder ratios. The compressive strengths of the concrete were measured after exposure to temperatures ranging from 100 °C to 800 °C at 100 °C intervals.

Of the four concrete mixes, only the three mixes listed in Table 4 were used as input data for the ML models; Mix 2 of Khan et al. (2013) was excluded because its cement-to-binder and water-to-binder ratios were identical to those of Mix 1, Mix 3, and Mix 4. The experimental compressive strengths at room temperature were 13 MPa, 39 MPa, and 51 MPa for Mix 1, Mix 3, and Mix 4, respectively.

The compressive strengths predicted by the ML models were converted to cube strengths using Eq. (4) (Tam et al. 2017), because the experimental results were based on cube strength.

(4)
$\text{cubic strength} = \text{cylindrical strength}\times 1.25$
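A minimal sketch of this validation step is shown below, assuming the tuned CatBoost model, the fitted scaler from the preprocessing sketch, and a hypothetical DataFrame new_mixes holding the Table 4 inputs at each heating temperature.

```python
# Predict cylinder strengths for the new (unseen) mixes and convert to cube strength
new_scaled = scaler.transform(new_mixes[feature_cols])
pred_cylinder = best_cat.predict(new_scaled)

# Eq. (4): cube strength = 1.25 x cylinder strength (Tam et al. 2017)
pred_cube = 1.25 * pred_cylinder
```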

Fig. 6 illustrates the concrete strengths at 20 °C, 200 °C, 500 °C, and 800 °C predicted using the CatBoost and ET models (in black), compared with the experimental results (in gray). Fig. 6(a) shows that the trend of strength change with temperature and mix ratio is reproduced by the CatBoost model predictions. Both the experimental results and the CatBoost predictions showed that the strengths slightly increased at 200 °C but decreased significantly at 500 °C and 800 °C. Furthermore, the strength differences according to the mix ratios were predicted similarly to the experimental values by the CatBoost model. At 20 °C and 200 °C, the compressive strength was highest for Mix 4 (circle mark), which has a low water-to-binder ratio, followed by Mix 3 (square mark), with a low admixture substitution rate, and then Mix 1 (triangle mark). Notably, as the temperature increased, Mix 4 exhibited a greater reduction in strength than Mix 3, resulting in Mix 4 having a lower compressive strength than Mix 3 at 800 °C. This reversal in strength order at 800 °C was also accurately predicted by the CatBoost model.

In contrast, the bagging model, ET, predicted only minimal changes in strength between 20 °C and 500 °C, whereas the experimental results and the CatBoost predictions showed relatively large strength changes due to heat exposure. Only for Mix 4 did the ET model produce strength predictions that closely matched the experimental results, which may be because its water-to-binder ratio is similar to that of the data used to train the ML model. Therefore, it was consistently found that CatBoost was the most suitable method for predicting the strength of heated concrete. The validated conditions covered normal- and high-strength concrete made with cement alone or with fly ash or slag as admixtures, and heating temperatures of 200 °C, 500 °C, and 800 °C.

Furthermore, there is a limitation in that the strengths predicted by the ML models were compared with values obtained from cube specimens, while the developed ML models were targeted at predicting the strength of cylindrical specimens. Even though the strengths predicted by the ML models were converted to cube strengths using Eq. (4), the converted strengths are approximations, which might result in inaccurate predictions. In addition, the ML model predicted strength changes of concrete at high temperatures more reasonably than the strength at room temperature. This might be because the training data covered wide ranges and types of parameters, but the amount of training data was smaller than needed.

Fig. 6 Experimental and predicted compressive strength of concrete exposed to high temperature


Table 4 New dataset of mix proportions, adopted from Khan et al. (2013)

| Mix ID | Cement (kg/m3) | Fly ash (kg/m3) | Water-to-binder ratio |
|---|---|---|---|
| Mix 1 | 136 | 204 | 0.45 |
| Mix 3 | 204 | 136 | 0.45 |
| Mix 4 | 198 | 198 | 0.35 |

6. Conclusion

This study utilized five ensemble machine learning algorithms (ET, RF, GBR, XGBR, and CatBoost) to predict the compressive strength of concrete subjected to high temperatures, considering the effects of SCM content. The most suitable ensemble model was proposed by assessing and comparing predictive accuracies. In addition, new input data were applied to the two most accurate ML models, CatBoost and ET, to validate their performance. As a result, the CatBoost model was found to be the most suitable for prediction, and it is expected that, with further development through an expanded database, it could be used as a method for evaluating the fire resistance performance of concrete in the future. The detailed findings of this study are as follows.

The ET, RF, GBR, XGBR, and CatBoost models exhibited reasonable prediction performance; their R2 values ranged from 0.8 to 0.9, with CatBoost standing out for its exceptional performance. CatBoost demonstrated higher accuracy than the other models across all indicators, including R2, RMSE, and MAE. Minimal differences were observed between the bagging and boosting methods, but within the bagging method, ET outperformed the RF model.

The evaluation of the key factors influencing the compressive strength of fire-damaged concrete highlighted differences between CatBoost and the other four models. In CatBoost, the heating temperature was identified as the most influential factor, followed by the water-to-binder ratio. Conversely, in the other four models, the order was reversed, giving higher priority to the water-to-binder ratio over the heating temperature.

When predicting strength from new input data that had not been used for training or testing, CatBoost demonstrated better prediction accuracy than the ET model. The CatBoost model's prediction of strength changes with heating temperature and mix ratio captured the experimental results well, which showed a slight strength increase from 20 °C to 200 °C and a significant decrease from 200 °C to 800 °C.

Funding

This research was supported by the Basic Science Research Program funded by the National Research Foundation of Korea (NRF) (NRF-2021R1F1A1051300).

Acknowledgement

The authors are grateful for the help of Chae-eun Lee and Sun-young Cha, undergraduate students in the Department of Architectural and Urban Systems Engineering, Ewha Womans University, in conducting the experiments. We are also grateful for the help of Jae-sun Kwon, an undergraduate student in the Department of Architectural and Urban Systems Engineering, Ewha Womans University, in initializing the machine learning codes.

References

1. Ahmad, A., Farooq, F., Niewiadomski, P., Ostrowski, K., Akbar, A., Aslam, F., and Alyousef, R. (2021a) Prediction of Compressive Strength of Fly Ash Based Concrete Using Individual and Ensemble Algorithm. Materials 14(4), 794.
2. Ahmad, A., Ostrowski, K. A., Maślak, M., Farooq, F., Mehmood, I., and Nafees, A. (2021b) Comparative Study of Supervised Machine Learning Algorithms for Predicting the Compressive Strength of Concrete at High Temperature. Materials 14(15), 4222.
3. Ahmad, M., Hu, J. L., Ahmad, F., Tang, X. W., Amjad, M., Iqbal, M. J., Asim, M., and Farooq, A. (2021c) Supervised Learning Methods for Modeling Concrete Compressive Strength Prediction at High Temperature. Materials 14(8), 1983.
4. Chun, Y., Kwon, J., Kim, J., Son, H., Heo, S., Cho, S., and Kim, H. (2023) Experimental Investigation of the Strength of Fire-Damaged Concrete Depending on Admixture Contents. Construction and Building Materials 378, 131143.
5. Farooq, F., Nasir Amin, M., Khan, K., Rehan Sadiq, M., Javed, M. F., Aslam, F., and Alyousef, R. (2020) A Comparative Study of Random Forest and Genetic Engineering Programming for the Prediction of Compressive Strength of High Strength Concrete (HSC). Applied Sciences 10(20), 7330.
6. Feng, D. C., Liu, Z. T., Wang, X. D., Chen, Y., Chang, J. Q., Wei, D. F., and Jiang, Z. M. (2020) Machine Learning-Based Compressive Strength Prediction for Concrete: An Adaptive Boosting Approach. Construction and Building Materials 230, 117000.
7. Khan, M. S., Prasad, J., and Abbas, H. (2013) Effect of High Temperature on High-Volume Fly Ash Concrete. Arabian Journal for Science and Engineering 38, 1369-1378.
8. Kim, J. B., Shin, K. S., and Park, K. B. (2012) Mechanical Properties of Ultra High Strength Concrete Using Ternary Blended Cement. Journal of the Korea Institute for Structural Maintenance and Inspection 16(6), 56-62. (In Korean)
9. Lee, B. S., Jun, M. H., and Lee, D. H. (2012) The Effect of Mixing Ratio of Blast Furnace Slag and Fly Ash on Material Properties of 80 MPa High Strength Concrete with Ternary Cement. LHI Journal of Land, Housing, and Urban Affairs 3(3), 287-297. (In Korean)
10. Lee, D. H., Seo, D. H., Jun, P. H., Paik, M. S., Lim, N. G., and Jung, S. J. (2002) The Experimental Study on High Strength Concrete of High Volume Fly-Ash. KCI 2002 Fall Conference. Korea Concrete Institute (KCI), 14(2), 275-280. (In Korean)
11. Li, Q. F., and Song, Z. M. (2022) High-Performance Concrete Strength Prediction Based on Ensemble Learning. Construction and Building Materials 324, 126694.
12. Ma, Q., Guo, R., Zhao, Z., Lin, Z., and He, K. (2015) Mechanical Properties of Concrete at High Temperature: A Review. Construction and Building Materials 93, 371-383.
13. Ramzi, S., and Hajiloo, H. (2023) The Effects of Supplementary Cementitious Materials (SCMs) on the Residual Mechanical Properties of Concrete after Exposure to High Temperatures. Buildings 13(1), 103.
14. Rathakrishnan, V., Bt. Beddu, S., and Ahmed, A. N. (2022) Predicting Compressive Strength of High-Performance Concrete with High Volume Ground Granulated Blast-Furnace Slag Replacement Using Boosting Machine Learning Algorithms. Scientific Reports 12(1), 9539.
15. Sapkota, S. C., Saha, P., Das, S., and Meesaraganda, L. P. (2024) Prediction of the Compressive Strength of Normal Concrete Using Ensemble Machine Learning Approach. Asian Journal of Civil Engineering 25(1), 583-596.
16. Song, H., Ahmad, A., Farooq, F., Ostrowski, K. A., Maślak, M., Czarnecki, S., and Aslam, F. (2021) Predicting the Compressive Strength of Concrete with Fly Ash Admixture Using Machine Learning Algorithms. Construction and Building Materials 308, 125021.
17. Tam, C. T., Babu, D. S., and Li, W. (2017) EN 206 Conformity Testing for Concrete Strength in Compression. Procedia Engineering 171, 227-237.
18. Vo, T. C., Nguyen, T. Q., and Tran, V. L. (2024) Predicting and Optimizing the Concrete Compressive Strength Using an Explainable Boosting Machine Learning Model. Asian Journal of Civil Engineering 25(2), 1365-1383.
19. Yeh, I. C. (1998) Modeling of Strength of High-Performance Concrete Using Artificial Neural Networks. Cement and Concrete Research 28(12), 1797-1808.