Main Steps for POD

The main steps for the POD are as follows:

ToDo: Find a way to import from files whose names start with a number.

  1. Generate the microstructures

This script generates the microstructures for the training and testing datasets using the splinepy library with the HollowOctagon microtile, and saves them in the Data folder. The microstructure sizes are drawn at random from the following values:

  • Training dataset: [0.04, 0.12, 0.20, 0.28, 0.36]

  • Testing dataset: [0.08, 0.16, 0.24, 0.32]

The microstructures are generated using the following setup:

  • EPS: 1e-8

  • BOX_LENGTH: 0.14

  • BOX_HEIGHT: 0.041

  • SHOW_MICROSTRUCTURE: False

  • FILENAME: “microstructure.xml”

  • N_THREADS: 1

  • TILING: [6, 3]

  • N_REFINEMENTS: 0

  • DEGREE_ELEVATIONS: 1

  • H_REFINEMENTS: 0

  • INLET_BOUNDARY_ID: 2

  • OUTLET_BOUNDARY_ID: 3

  • INLET_PEAK_VELOCITY: 1.4336534897721067

  • CLOSING_FACE: “x”

  • OBJECTIVE_FUNCTION: [2, 3]

  • OBJECTIVE_FUNCTION_WEIGHTS: [7e5, 2e-3]

  • MICROTILE: HollowOctagon

  • N_SIZES_TRAIN: 125

  • N_SIZES_TEST: 64

The setup is saved in the Data folder.
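
A minimal sketch of this generation step is shown below. It assumes splinepy's microstructure API (Microstructure, tiles.HollowOctagon, helpme.create.box) and the gismo exporter in splinepy.io.gismo; the parametrization function and the way the random size enters the tile parameters are simplifying assumptions, not the exact implementation of the script.

```python
# Sketch only: generate one HollowOctagon microstructure with splinepy and
# export it as a gismo XML file. The way the random size enters the tile
# parameters is an assumption, not the exact implementation.
import numpy as np
import splinepy

BOX_LENGTH = 0.14
BOX_HEIGHT = 0.041
TILING = [6, 3]
FILENAME = "microstructure.xml"
SIZES_TRAIN = [0.04, 0.12, 0.20, 0.28, 0.36]

# Outer geometry into which the microtile pattern is mapped.
deformation_function = splinepy.helpme.create.box(BOX_LENGTH, BOX_HEIGHT)

# Randomly chosen tile size (assumption: one uniform size per microstructure).
size = np.random.default_rng().choice(SIZES_TRAIN)

microstructure = splinepy.microstructure.Microstructure(
    deformation_function=deformation_function,
    microtile=splinepy.microstructure.tiles.HollowOctagon(),
    tiling=TILING,
    parametrization_function=lambda points: np.full((len(points), 1), size),
)
patches = microstructure.create(closing_face="x")

# Export as a gismo-readable multipatch XML file.
if not isinstance(patches, splinepy.Multipatch):
    patches = splinepy.Multipatch(splines=patches)
splinepy.io.gismo.export(FILENAME, multipatch=patches)
```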

  2. Run the CFD simulation

This script runs the Stokes problem for all microstructures in the Data folder, using the gismo library via the stokes_example executable, with the following setup:

  • DEGREE_ELEVATIONS: 1

  • H_REFINEMENTS: 0

The script saves the output in the Data folder, in the following subfolders:

  • paraview

  • velocities

  • pressure

The files in each subfolder are named as follows:

  • paraview: index_{index}

  • velocities: velocity_field_{index}

  • pressure: pressure_field_{index}
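
A hedged sketch of how such a driver could loop over the microstructure files and call the stokes_example executable is shown below. The file name pattern and the command-line flags (-f, -e, -r) are hypothetical placeholders; the actual executable defines its own options.

```python
# Sketch only: run the Stokes solver for every microstructure XML in Data/.
# The file pattern and flag names below are hypothetical; adapt them to the
# real stokes_example executable.
import subprocess
from pathlib import Path

DEGREE_ELEVATIONS = 1
H_REFINEMENTS = 0
DATA_DIR = Path("Data")

for index, xml_file in enumerate(sorted(DATA_DIR.glob("microstructure_*.xml"))):
    subprocess.run(
        [
            "./stokes_example",
            "-f", str(xml_file),           # input geometry (hypothetical flag)
            "-e", str(DEGREE_ELEVATIONS),  # degree elevations (hypothetical flag)
            "-r", str(H_REFINEMENTS),      # h-refinements (hypothetical flag)
        ],
        check=True,
    )
    # Expected outputs: Data/paraview/index_{index},
    # Data/velocities/velocity_field_{index}, Data/pressure/pressure_field_{index}.
    print(f"Finished simulation {index}: {xml_file.name}")
```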

  3. Some data analysis

This script reads the data from the XML files and saves it as CSV files. It reads from the following folders inside the Data folder:

  • train

  • test

The script reads the following files:

  • parameter_input.xlsx

  • velocities: velocity_field_{index}.xml

  • pressure: pressure_field_{index}.xml

The script saves the data in the matrices folder, in the following files:

  • velocity.csv

  • pressure.csv

  • velocity_scaled.csv

  • pressure_scaled.csv

The script scales the data using sklearn's StandardScaler and saves the fitted scalers in the following files:

  • scaler_velocity.pkl

  • scaler_pressure.pkl
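
The scaling part of this step can be sketched as follows. The XML parsing is replaced by dummy arrays, and the snapshot-matrix layout (one column per simulation) is an assumption; the StandardScaler usage and the file names follow the lists above.

```python
# Sketch only: scale the assembled snapshot matrices with StandardScaler and
# store the CSV files and the fitted scalers listed above.
import pickle
from pathlib import Path

import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Dummy arrays standing in for the data parsed from the XML files.
# Assumed layout: one column per simulation (whether rows or columns are the
# scaled features depends on how the real matrices are stored).
rng = np.random.default_rng(0)
snapshots = {
    "velocity": rng.normal(size=(500, 125)),
    "pressure": rng.normal(size=(200, 125)),
}

Path("Data/matrices").mkdir(parents=True, exist_ok=True)

for name, data in snapshots.items():
    scaler = StandardScaler()
    scaled = scaler.fit_transform(data)

    pd.DataFrame(data).to_csv(f"Data/matrices/{name}.csv", index=False)
    pd.DataFrame(scaled).to_csv(f"Data/matrices/{name}_scaled.csv", index=False)
    with open(f"Data/matrices/scaler_{name}.pkl", "wb") as f:
        pickle.dump(scaler, f)
```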

  4. Train the reduced order models

Run d_TrainAllModels.py to train all models at once, or run the following scripts to train the models individually:

  • d1_TrainModels_LR.py for Linear Regression

  • d2_TrainModels_GP.py for Gaussian Process Regression

  • d3_TrainModels_RBF.py for Radial Basis Function Regression
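
A condensed sketch of the POD-plus-regression idea behind these scripts: compute a POD basis from the scaled snapshot matrix via a thin SVD, project the snapshots onto the leading modes, and regress the reduced coefficients on the geometry parameters. The regressors mirror the three scripts (LinearRegression, GaussianProcessRegressor, and an RBF model via scipy's RBFInterpolator); the snapshot data, parameter array, and number of modes are placeholders.

```python
# Sketch only: POD basis via SVD plus regression of the reduced coefficients
# on the geometry parameters. Data, parameters and mode count are placeholders.
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
snapshots = rng.normal(size=(500, 125))           # placeholder for velocity_scaled.csv
params = rng.uniform(0.04, 0.36, size=(125, 3))   # placeholder geometry parameters
n_modes = 10

# POD via thin SVD: the first n_modes columns of U span the reduced space.
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :n_modes]
coeffs = basis.T @ snapshots                      # (n_modes, n_snapshots)

# One regressor per script; each maps parameters -> reduced coefficients.
models = {
    "LR": LinearRegression().fit(params, coeffs.T),
    "GP": GaussianProcessRegressor().fit(params, coeffs.T),
    "RBF": RBFInterpolator(params, coeffs.T),
}

# Prediction: regress the reduced coefficients, then lift with the POD basis.
new_params = params[:1]
for name, model in models.items():
    pred = model(new_params) if name == "RBF" else model.predict(new_params)
    full_field = basis @ pred.T
    print(name, full_field.shape)
```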

  5. Create patches for error evaluation in gismo

This script creates the patches for error evaluation in gismo. It is only necessary if the transformation back to gismo is needed.
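
The details of this step are not spelled out here; as a rough sketch, under the assumption that splinepy's gismo exporter is used for the transformation back to gismo, it could look like this (the patches and the file name are placeholders):

```python
# Sketch only: export patches back to a gismo-readable XML file so the error
# evaluation can be run in gismo. Patches and file name are placeholders.
import splinepy

patches = [splinepy.helpme.create.box(0.14, 0.041)]  # placeholder patch list
splinepy.io.gismo.export(
    "Data/error_evaluation_patches.xml",
    multipatch=splinepy.Multipatch(splines=patches),
)
```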

  6. Evaluate the error

This script evaluates the error of the reduced order models.
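
The error metric is not specified here; a common choice is a relative L2 error per test snapshot, which the sketch below assumes (the arrays are placeholders for the full-order and ROM fields):

```python
# Sketch only: relative L2 error of ROM predictions against full-order snapshots.
import numpy as np

def relative_l2_error(reference, prediction):
    """Column-wise relative L2 error between two (n_dofs, n_snapshots) arrays."""
    return np.linalg.norm(reference - prediction, axis=0) / np.linalg.norm(reference, axis=0)

rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 64))                         # placeholder test snapshots
prediction = reference + 1e-3 * rng.normal(size=reference.shape)  # placeholder ROM output
print(relative_l2_error(reference, prediction).mean())
```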

  7. Get timings

This script collects timings for the reduced order models.
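
A simple way to collect such timings is time.perf_counter around the ROM prediction call; the predict_rom function below is a hypothetical placeholder for the trained model's prediction.

```python
# Sketch only: time the ROM prediction with perf_counter.
import time
import numpy as np

def predict_rom(params):
    """Hypothetical placeholder for the trained ROM's prediction call."""
    return np.zeros((500, len(params)))

params = np.random.default_rng(0).uniform(0.04, 0.36, size=(64, 3))

start = time.perf_counter()
fields = predict_rom(params)
elapsed = time.perf_counter() - start
print(f"ROM prediction of {fields.shape[1]} snapshots took {elapsed:.4f} s")
```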