
Test pydantic #178

Draft: wants to merge 90 commits into base branch develop

Commits
01b9143
wip first commit of analysis helpers
cchall Dec 10, 2024
e9280c5
wip
cchall Dec 10, 2024
fed0e39
wip added Results class
cchall Dec 10, 2024
33682f2
wip symlinks replaced with file contents
cchall Dec 11, 2024
fc570a3
wip small updates
cchall Dec 11, 2024
f374d6e
wip pseudo code start of pydantic parameters
cchall Dec 11, 2024
12ca99d
wip adding abc for code
cchall Jan 21, 2025
576adb7
wip Parameter is usable now
cchall Jan 21, 2025
8bb2b0a
wip better handling for defaults in Codes and adding Setting
cchall Jan 21, 2025
3267c43
wip abc for setup
cchall Jan 21, 2025
c586556
wip
cchall Jan 22, 2025
866737a
wip
cchall Jan 23, 2025
9610119
wip
cchall Jan 24, 2025
9093b48
wip
cchall Jan 24, 2025
037aa78
wip continuing testing of options implementations
cchall Jan 28, 2025
41b5cd9
wip basic options implemented for testing
cchall Jan 28, 2025
28258d1
wip fixing up validation of options
cchall Jan 28, 2025
7dec7f2
wip splitting apart scan and optimize configurations
cchall Jan 29, 2025
d182c87
wip creating mesh_scan option
cchall Jan 29, 2025
56d0774
wip Updates to prepare for integrating parser to libEnsemble setup
cchall Jan 30, 2025
da4afc4
wip starting to interface new parsing and data structures to libEnsem…
cchall Jan 30, 2025
e6fb60e
wip initial hookup working - dfols example runs
cchall Jan 30, 2025
ce4a7d4
wip by-code symlink targets are empty for more cases than not
cchall Jan 30, 2025
fec9d9e
wip add an implementation for user code type
cchall Jan 31, 2025
da7a5eb
wip make options selection into enum
cchall Jan 31, 2025
6ff9fb0
wip Add executor check to configuration model
cchall Jan 31, 2025
9624d59
wip make mesh options software_options
cchall Jan 31, 2025
9f5cf2c
wip changes and fixes to add executor support and get elegant example…
cchall Jan 31, 2025
842204a
wip elegant matching example running
cchall Jan 31, 2025
8f43766
closes #145
cchall Jan 31, 2025
1da1ba3
closes #172
cchall Jan 31, 2025
e4cfb09
wip adjust how exit_criteria is set to get sampler types working
cchall Feb 1, 2025
856352b
wip add opal support
cchall Feb 1, 2025
c08b3cc
wip subclassing str allows a subset of the enum names to be specified…
cchall Feb 1, 2025
4f5f7c6
wip adding madx
cchall Feb 1, 2025
5632ddc
wip more fixes for running sampler jobs
cchall Feb 1, 2025
3f30db6
wip support for flash
cchall Feb 1, 2025
78c5bc9
wip checking dict.get is a bad idea if the value can be 0
cchall Feb 1, 2025
cb90865
wip adding first pass at pydantic models for genesis
cchall Feb 1, 2025
8fe8907
wip moved parse_simulation_input_file into the new parsers module. Th…
cchall Feb 2, 2025
acec4e7
wip lark/pydantic parsing to model works on the genesis input from ge…
cchall Feb 2, 2025
7a8a614
wip grammars for elegant cmd and lattice, spiffe cmd
cchall Feb 3, 2025
38a4090
remove unused testing files
cchall Feb 3, 2025
850a9a1
wip - starting to insert pieces of parser to write code. Prepare to r…
cchall Feb 10, 2025
fe17cb0
Move each Code setup module out of its sub-directory and into rsopt.c…
cchall Feb 10, 2025
e3dfe4c
move serial python execution functions out of codes and into libe_too…
cchall Feb 10, 2025
f9f9912
move model creation into model package
cchall Feb 10, 2025
0c73071
move model creation into model package
cchall Feb 10, 2025
0dbf961
validates re-structuring up through the process of creating a dynamic…
cchall Feb 10, 2025
95f2391
updated the base command model structure so that commands list and ea…
cchall Feb 10, 2025
0a2808a
moves a few files from flash and genesis and updates pathing to compl…
cchall Feb 10, 2025
21aedbd
adds a basic write model functionality under new code organization
cchall Feb 10, 2025
1fedcd0
adds a basic write model functionality under new code organization
cchall Feb 10, 2025
a68b053
Revert "adds a basic write model functionality under new code organiz…
cchall Feb 10, 2025
5da3c3e
add back in writer __init__ that was removed in commit revert 21aedbd…
cchall Feb 10, 2025
405e87f
Give base parameter and setting fields to explicitly set attribute an…
cchall Feb 13, 2025
22d1ae8
Simplifies command writing by using model_dump. Also allows for use o…
cchall Feb 13, 2025
0285bcd
Moves organizing and writing the kwarg dict a job for each Code objec…
cchall Feb 13, 2025
56d3bce
Implements the model editing for spiffe and updates elegant to use pa…
cchall Feb 13, 2025
5e2bf90
parsing function for spiffe. needs to be defined separate from sirepo…
cchall Feb 13, 2025
13b2963
update the spiffe example now that full rsopt functionality is define…
cchall Feb 13, 2025
290b40d
Add the basics of the Genesis parsing and writing following the templ…
cchall Feb 13, 2025
32bcfd0
Small cleanup for spiffe files
cchall Feb 13, 2025
b7ceee6
Noting behavior when #182 occurs
cchall Feb 13, 2025
b96242c
Fixups to get genesis fully working. Mostly just properly handling ty…
cchall Feb 13, 2025
778cc35
Clean up genesis model validation and writing. Adds a formatting func…
cchall Feb 14, 2025
fcdab74
Fix method discriminator so that user can just pass the name to method
cchall Feb 14, 2025
3941bac
Change SUPPORTED_OPTION enum structure to not use aliasing so the mod…
cchall Feb 14, 2025
ab05b93
Add dlib updated dlib option class first pass. Remove unused code fro…
cchall Feb 14, 2025
13d2d4c
Add updated pysot options class and optimizer libEnsemble interface c…
cchall Feb 14, 2025
709747e
Update dlib interface getters and set typing for base model to allow …
cchall Feb 14, 2025
7d898f9
Updated latin hypercube interface and corresponding libEnsemble hooks…
cchall Feb 14, 2025
11af032
Add an argument_passing mode to select kwargs or a single array to pa…
cchall Feb 15, 2025
8aaad25
Add a vectorized parameter to simplify adding repeated parameters tha…
cchall Feb 15, 2025
841708f
fixed bug in pysot option
cchall Feb 15, 2025
ee20475
Allow for not having static outputs. SimSpecs is set internally and s…
cchall Feb 15, 2025
34d7d75
Set up nsga2 option and re-add the deap-nsga2 generator
cchall Feb 15, 2025
eab7fe4
Fixes to get nsga2 generator working. Importantly adds in the persis_…
cchall Feb 16, 2025
715f8dd
Fix dimension property getter name
cchall Feb 16, 2025
1c9f56f
Simplify the checks for executor and make sure that we are creating r…
cchall Feb 16, 2025
6ca4e26
Update parameters of zdt4 example for improved result
cchall Feb 16, 2025
9d01e56
Updates SciPy optimizer support. Provides the template, going forward…
cchall Feb 18, 2025
0b34b01
Splits the libEnsemble setup class off from the base local optimizer …
cchall Feb 18, 2025
1b4f687
Remove completed todo
cchall Feb 18, 2025
110804d
Updating nlopt interface
cchall Feb 18, 2025
c510309
Add scipy to post-run result printout group
cchall Feb 18, 2025
a42fe85
First pass at adding updated xopt mobo generator. Basic implementatio…
cchall Feb 19, 2025
4f46b14
Moves software_options validator to ABC options class so that it won'…
cchall Mar 10, 2025
32ebaa8
wip - intermediate stage of updating aposmm option
cchall Mar 10, 2025
d1cf019
Merge remote-tracking branch 'origin/test_pydantic' into test_pydantic
cchall Mar 10, 2025
2 changes: 1 addition & 1 deletion docs/options.rst
@@ -28,7 +28,7 @@ Global Options
Stop when this many new points have been evaluated by sim_f
'gen_max' [int]:
Stop when this many new points have been generated by gen_f
'elapsed_wallclock_time' [float]:
'wallclock_max' [float]:
Stop when this time (since the manager has been initialized) has elapsed
'stop_val' [(str, float)]:
Stop when H[str] < float for the given (str, float pair)
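This change renames the wallclock criterion to match libEnsemble's current `exit_criteria` key. A minimal sketch of how criteria like these might be checked against run state (illustrative only; the real stopping logic lives in libEnsemble, and `should_stop` below is a hypothetical helper):

```python
import time

def should_stop(exit_criteria: dict, n_sims: int, n_gens: int, start_time: float) -> bool:
    """Return True once any configured exit criterion is met."""
    if 'sim_max' in exit_criteria and n_sims >= exit_criteria['sim_max']:
        return True
    if 'gen_max' in exit_criteria and n_gens >= exit_criteria['gen_max']:
        return True
    if 'wallclock_max' in exit_criteria and time.time() - start_time >= exit_criteria['wallclock_max']:
        return True
    return False

print(should_stop({'sim_max': 80}, n_sims=80, n_gens=0, start_time=time.time()))  # True
```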
44 changes: 43 additions & 1 deletion examples/elegant_matching_parallel_execution_example/elegant.ele
13 changes: 6 additions & 7 deletions examples/mobo_example/config_mobo.yml
@@ -14,18 +14,17 @@ codes:
function: tnk
execution_type: serial
options:
nworkers: 4
nworkers: 2
software: mobo
objectives: 2
constraints: 2
reference: [1.4, 1.4]
software_options:
fixed_cost: True
num_of_objectives: 2
min_calc_to_remodel: 3 # if min_calc_to_remodel == nworkers - 1 then model update is synchronous
use_gpu: False
constraints:
c1: ['GREATER_THAN', 0]
c2: ['LESS_THAN', 0.5]
reference_point:
x1: 1.4
x2: 1.4
exit_criteria:
sim_max: 80
sim_max: 22
output_file: mobo_example_results
26 changes: 26 additions & 0 deletions examples/problems/zdt4/config_nsga2.yml
@@ -0,0 +1,26 @@
codes:
- python:
parameters:
x0:
min: 0
max: 1
start: 0
x_n:
dimension: 9
min: -10
max: 10
start: 0
setup:
input_file: zdt4.py
function: obj_zdt4
argument_passing: args
execution_type: serial
options:
software: nsga2
nworkers: 10
software_options:
pop_size: 100
n_objectives: 2
eta: 1
exit_criteria:
sim_max: 300
19 changes: 19 additions & 0 deletions examples/problems/zdt4/config_scan.yml
@@ -0,0 +1,19 @@
codes:
- python:
parameters:
x1:
min: 0
max: 1
start: 0
x_n:
dimension: 9
min: -10
max: 10
start: 0
setup:
input_file: zdt4.py
function: obj_zdt4
argument_passing: args
execution_type: serial
options:
software: mesh_scan
27 changes: 26 additions & 1 deletion examples/python_aposmm_example/six_hump_camel.py
16 changes: 8 additions & 8 deletions examples/python_chwirut_example/config_chwirut.yaml
@@ -20,14 +20,14 @@ codes:
options:
nworkers: 2
software: dfols
method: dfols
components: 214
# method: dfols
software_options:
dfols_kwargs:
do_logging: False
rhoend: 1e-5
user_params:
'model.abs_tol': 1e-10
'model.rel_tol': 1e-4
components: 214
dfols_kwargs:
do_logging: False
rhoend: 1e-5
user_params:
'model.abs_tol': 1e-10
'model.rel_tol': 1e-4
exit_criteria:
sim_max: 400
25 changes: 25 additions & 0 deletions examples/spiffe/check_spiffe_parse.py
@@ -0,0 +1,25 @@
from rsopt.codes.parsers import spiffe_parser
from rsopt.codes.models import spiffe_model, base_model
from rsopt.codes.writers import write

parser = spiffe_parser.parser
with open('gun.spiffe', 'r') as f:
spiffe_file = f.read()

spiffe_input = parser.parse(spiffe_file)
test_model = base_model.generate_model(spiffe_model.SPIFFE, 'spiffe')

spiffe_model_instance = test_model.model_validate(
spiffe_parser.Transformer().transform(spiffe_input)
)
print(spiffe_model_instance)

spiffe_model_instance.edit_command(command_name='define_geometry',
parameter_name='nz',
parameter_value=264,
command_index=0
)

print('updated spiffe model', spiffe_model_instance)

print(write.write_model(spiffe_model_instance, write.Spiffe))
13 changes: 13 additions & 0 deletions examples/spiffe/config.yml
@@ -0,0 +1,13 @@
codes:
- spiffe:
settings:
define_geometry.zmin: -0.001
define_geometry.nz: 515
parameters:
setup:
input_file: gun.spiffe
execution_type: serial
options:
software: mesh_scan
sym_links:
- dc.geo
15 changes: 15 additions & 0 deletions examples/spiffe/dc.geo
@@ -0,0 +1,15 @@
! DC gun example
&po x=0.000, y=0.000, potential=0 &end
&po x=0.000, y=0.01, potential=0 &end
&po x=0.007, y=0.02, potential=0 &end
&po x=0.009, y=0.02, potential=0 &end
&po x0=0.009, y0=0.028, nt=2, r=0.008, potential=0, theta=90 &end
&po x=0.000, y=0.036, potential=0 &end
&po x=0.000, y=0.036, potential=0 &end
&po x=0.000, y=0.040, potential=0 &end
&po x=0.040, y=0.040, potential=1e6 &end
&po x=0.040, y=0.011, potential=1e6 &end
&po x=0.041, y=0.010, potential=1e6 &end
&po x=0.070, y=0.010, potential=1e6 &end
&po x=0.070, y=0.000, potential=1e6 &end
&po x=0.000, y=0.00 &end
78 changes: 78 additions & 0 deletions examples/spiffe/gun.spiffe
@@ -0,0 +1,78 @@
&define_geometry
command_name = define_geometry,
nz = 513,
nr = 263,
zmin = 0.0,
zmax = 0.072,
rmax = 0.042,
boundary = "dc.geo",
boundary_output = "dc.bnd",
discrete_boundary_output = "dc.dbnd",
interior_points = "dc.pts",
radial_interpolation = 1,
longitudinal_interpolation = 1,
&end

&define_cathode
command_name = define_cathode,
z_position = 1e-06,
outer_radius = 0.003,
current_density = 1000000.0,
number_per_step = 8.0,
start_time = 0.0,
stop_time = 2e-10,
initial_pz = 1e-06,
distribution_correction_interval = 1,
&end

&define_screen
command_name = define_screen,
filename = "dc.sc1",
z_position = 0.02,
direction = forward,
&end

&define_screen
command_name = define_screen,
filename = "dc.sc2",
z_position = 0.04,
direction = forward,
&end

&define_snapshots
command_name = define_snapshots,
filename = "dc.snap",
time_interval = 4.375e-11,
start_time = 4.375e-11,
&end

&define_field_output
command_name = define_field_output,
filename = dc.fld,
time_interval = 4.375e-11,
start_time = 4.375e-11,
z_interval = 1,
r_interval = 1,
&end

&poisson_correction
command_name = poisson_correction,
step_interval = 32,
accuracy = 0.0001,
error_charge_threshold = 1e-15,
verbosity = 1,
&end

&integrate
command_name = integrate,
dt_integration = 8.544921875e-14,
start_time = 0.0,
finish_time = 7e-10,
status_interval = 256,
space_charge = 1,
check_divergence = 1,
smoothing_parameter = 0.0,
J_filter_multiplier = 0.0,
terminate_on_total_loss = 1,
&end

122 changes: 122 additions & 0 deletions rsopt/analysis.py
@@ -0,0 +1,122 @@
import pathlib
import sortedcontainers
from ruamel.yaml import YAML
import numpy as np
import pandas as pd
import pydantic

_SIM_PATH_ZEROS = 4

class BaseResult(pydantic.BaseModel, extra='allow'):
sim_id: int
sim_worker: int
sim_ended: bool
sim_started: bool
base_path: pathlib.Path

@property
def path(self) -> pathlib.Path:
return self.base_path.joinpath(f"worker{self.sim_worker}/sim{self.sim_id:0{_SIM_PATH_ZEROS}}")


class Results:
# TODO: Could try to load preprocess and set attributes from them
def __init__(self, results, parameters):
# Create an index for each attribute
self._indexed_parameters = {param: sortedcontainers.SortedDict() for param in parameters}
self.results = results

# Populate the indexes
for obj in results:
for param in parameters:
value = getattr(obj, param)
if value not in self._indexed_parameters[param]:
self._indexed_parameters[param][value] = []
self._indexed_parameters[param][value].append(obj)

def __iter__(self):
return iter(self.results)

def range_query(self, parameter: str, low, high):
index = self._indexed_parameters[parameter]
keys_in_range = index.irange(low, high)
result = []
for key in keys_in_range:
result.extend(index[key])
return result


def history_to_dict(H: np.ndarray, x_names=None) -> dict:
data = {}
for name in H.dtype.names:
if name == 'x':
for i, row in enumerate(H[name].T):
rn = x_names[i] if x_names is not None else f'x{i}'
data[rn] = row
continue
data[name] = H[name]

return data

def gather_config_params(config: dict) -> list[str]:
parameters = []
for code in config['codes']:
code_type = [k for k in code.keys()][0] # should be len==1
if not code[code_type].get('parameters'):
continue
parameters.extend(code[code_type].get('parameters').keys())

return parameters

def create_model(config: dict) -> pydantic.BaseModel:
x_keys = gather_config_params(config)
Result = pydantic.create_model(
'Result',
**{x: (float, ...) for x in x_keys},
__base__=BaseResult
)

return Result


def load_results(directory: str,
config_name: str or None = None,
history_name: str or None = None) -> list[BaseResult]:
directory = pathlib.Path(directory)  # directory may be passed as str; glob below needs a Path
config = YAML(typ='safe').load(
pathlib.Path(config_name) if config_name is not None else next(directory.glob('*.yml'))
)
history = np.load(history_name if history_name is not None else next(directory.glob('*.npy')))

x_names = gather_config_params(config)
model = create_model(config)

results = []

for h in history:
results.append(
model(**history_to_dict(h, x_names=x_names), base_path=directory)
)

return results






if __name__ == '__main__':
test_H = '../examples/python_chwirut_example/H_run_scan_history_length-500_evals-500_workers-1.npy'
H = np.load(test_H)

test_config = pathlib.Path('../examples/python_chwirut_example/config_chwirut.yaml')
config = YAML(typ='safe').load(test_config)

res = gather_config_params(config)

print(
create_model(config)(
**history_to_dict(H[0], x_names=res),
base_path='ab'
)
)
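The `Results` class above keeps one `sortedcontainers.SortedDict` per parameter so `range_query` can walk only keys inside `[low, high]`. A standalone sketch of that indexing idea using just the stdlib (`bisect` over sorted keys stands in for `SortedDict.irange`; `RangeIndex` is illustrative, not the rsopt class):

```python
import bisect
from collections import defaultdict

class RangeIndex:
    """Bucket records by one parameter's value and answer range queries over it."""
    def __init__(self, records, parameter):
        self._buckets = defaultdict(list)
        for rec in records:
            self._buckets[rec[parameter]].append(rec)
        self._keys = sorted(self._buckets)

    def range_query(self, low, high):
        """Return all records whose indexed value lies in [low, high]."""
        lo = bisect.bisect_left(self._keys, low)
        hi = bisect.bisect_right(self._keys, high)
        out = []
        for key in self._keys[lo:hi]:
            out.extend(self._buckets[key])
        return out

index = RangeIndex([{'x0': 0.1}, {'x0': 0.5}, {'x0': 0.9}], 'x0')
print(len(index.range_query(0.0, 0.6)))  # 2
```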

8 changes: 7 additions & 1 deletion rsopt/codes/__init__.py
@@ -1,3 +1,6 @@
import pydantic
import typing
from rsopt.codes import spiffe, elegant, python, opal, madx, flash, genesis
# Templated codes have schema files that can be used to check input and create run files. Otherwise user
# must supply module containing inputs
TEMPLATED_CODES = ['elegant', 'opal', 'madx', 'flash']
@@ -7,4 +10,7 @@

# Supported codes have defined Job class
# FUTURE: 'Unsupported' codes could become a class of supported codes that have expanded user input required to run
SUPPORTED_CODES = ['python', 'user', 'genesis', *TEMPLATED_CODES]
# SUPPORTED_CODES = ['python', 'user', 'genesis', *TEMPLATED_CODES]

SUPPORTED_CODES = typing.Union[elegant.Elegant, flash.Flash, genesis.Genesis,
madx.Madx, opal.Opal, python.Python, spiffe.Spiffe]
126 changes: 126 additions & 0 deletions rsopt/codes/elegant.py
@@ -0,0 +1,126 @@
import logging
import typing
from copy import deepcopy
from rsopt.configuration.schemas import code
from rsopt.configuration.schemas import setup as setup_schema
from rsopt.configuration.setup import IGNORED_FIELDS

LOG = logging.getLogger('libensemble')


def _get_model_fields(model):
commands = {}
command_types = []
elements = {}
for i, c in enumerate(model.models.commands):
if c['_type'] not in command_types:
command_types.append(c['_type'])
commands[c['_type'].lower()] = [i]
else:
commands[c['_type'].lower()].append(i)
for i, e in enumerate(model.models.elements):
elements[e['name'].upper()] = [i]

return commands, elements

class Setup(setup_schema.Setup):
pass

class Elegant(code.Code):
code: typing.Literal['elegant'] = 'elegant'
setup: Setup

@classmethod
def serial_run_command(cls) -> str or None:
return 'elegant'

@classmethod
def parallel_run_command(cls) -> str or None:
return 'Pelegant'

@property
def use_executor(self) -> bool:
return True

def generate_input_file(self, kwarg_dict, directory, is_parallel):
model = self._edit_input_file_schema(kwarg_dict)

model.write_files(directory)

@classmethod
def _get_ignored_fields(cls) -> dict:
return IGNORED_FIELDS.get(cls.code, {})

def _edit_input_file_schema(self, kwarg_dict: dict):
# Name cases in the Sirepo model:
# eLeMENt NAmeS
# ELEMENT TYPES
# element parameters
# command _type
# coMmaNd PaRaMetErs

# While exact element name case is kept when the model is read, all elements are written in upper case. I think
# elegant doesn't distinguish case anyway. For the element parser we assume element names are unique regardless
# of case.
commands, elements = _get_model_fields(self.input_file_model) # modifies element name case to UPPER
model = deepcopy(self.input_file_model)

for n, v in kwarg_dict.items():
item_model = self.get_parameter_or_setting(n)
field, index, name = item_model.item_name, item_model.item_index, item_model.item_attribute

# If this is a command
if field.lower() in commands.keys():
# Make sure that if it is a repeated command we know which one to edit
assert index or len(commands[field.lower()]) == 1, \
"{} is not unique in {}. Please add identifier".format(n, self.setup.input_file)
if index:
assert int(index) <= len(commands[field.lower()]), f"Cannot assign to instance {index} of command '{field}'. There are only {len(commands[field.lower()])} instances."
fid = commands[field.lower()][int(index) - 1 if index else 0]
# Handle commands in a case-insensitive fashion
if name.lower() in map(str.lower, model.models.commands[fid].keys()): # Command fields are case-sensitive in schema so we standardize to lower
# If we need to edit the command now we need to match case to access dict
for case_name in model.models.commands[fid].keys():
if case_name.lower() == name.lower():
model.models.commands[fid][case_name] = v
break
else:
command_type = model.models.commands[fid]["_type"]
available_fields = '\nRecognized fields are:\n ' + '\n '.join(
sorted((k for k in model.models.commands[fid].keys() if
not k.startswith('_') and k != 'isDisabled'))
)
raise NameError(f"Field: '{name}' is not found for command {command_type}" + available_fields)
# The name is not recognized for this command: report an error, unless it is a sirepo ignored field,
# in which case only warn the user that nothing will happen
else:
command_type = model.models.commands[fid]["_type"]
if name.lower() in self._get_ignored_fields():
LOG.warning("Trying to edit protected field `{name}` is not permitted.".format(name=name))
continue
available_fields = '\nRecognized fields are:\n ' + '\n '.join(
sorted((k for k in model.models.commands[fid].keys() if not k.startswith('_') and k != 'isDisabled'))
)
raise NameError(f"Field: '{name}' is not found for command {command_type}" + available_fields)
# Assume that if the field was not a command it is an element
elif field.upper() in elements: # Sirepo maintains element name case so we standardize to upper here
fid = elements[field.upper()][0]
# Edit the element parameter - it is implied that all parameters are not case-sensitive
if model.models.elements[fid].get(name.lower()) is not None:
model.models.elements[fid][name.lower()] = v
# The element does not have the requested field, report an error to the user
else:
ele_type = model.models.elements[fid]["type"]
ele_name = model.models.elements[fid]["name"]
available_parameters = '\nRecognized parameters are:\n ' + '\n '.join(
sorted((k for k in model.models.elements[fid].keys() if
not k.startswith('_') and k != 'isDisabled'))
)
raise NameError(f"Parameter: {name} is not found for element {ele_name} with type {ele_type}" +
available_parameters)
# The field was not a command or element we cannot handle it
else:
raise ValueError("{} was not found in the {} lattice or commands loaded from {}".format(field, self.code,
self.setup.input_file))

return model
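The command-editing loop in `_edit_input_file_schema` matches field names case-insensitively against the model's keys, then writes back through the key's original case. A compact standalone sketch of that pattern (the helper and sample dict are illustrative, not rsopt code):

```python
def set_case_insensitive(d: dict, name: str, value) -> bool:
    """Set d[k] = value for the key k matching `name` ignoring case; return False if absent."""
    for key in d:
        if key.lower() == name.lower():
            d[key] = value
            return True
    return False

# Keys mix cases, as in the Sirepo model; lookups should still succeed
cmd = {'_type': 'run_setup', 'Lattice': 'fodo.lte', 'p_central_mev': 1.0}
set_case_insensitive(cmd, 'lattice', 'ring.lte')
print(cmd['Lattice'])  # ring.lte
```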
74 changes: 74 additions & 0 deletions rsopt/codes/flash.py
@@ -0,0 +1,74 @@
import copy
import os
import pydantic
import typing
from functools import cached_property
from rsopt.codes.parsers.flash_parser import parse_file
from rsopt.configuration.schemas import code
from rsopt.configuration.schemas import setup as setup_schema


def _write_file(par_dict: dict, new_par_path: str) -> None:
"""Write a dictionary of parameters/values to a `flash.par` file."""

text = []

for key, val in par_dict.items():
text.append("{} = {} \n".format(key.strip(), val))

with open(new_par_path, 'w') as ff:
ff.write(''.join(text))

class _Model:
"""Simple class to emulate what we need from a sirepo model"""
def __init__(self, kwarg_dict):
self.kwarg_dict = kwarg_dict

def write_files(self, directory: str) -> None:
return _write_file(self.kwarg_dict, directory)

class Setup(setup_schema.Setup):
executable: pydantic.FilePath

@pydantic.field_validator('executable', mode='after')
@classmethod
def check_executable(cls, v: str):
assert os.access(v, os.X_OK), f"FLASH executable {v} does not have execution permission"
return v


class Flash(code.Code):
code: typing.Literal['flash'] = 'flash'
setup: Setup

def serial_run_command(self) -> str or None:
return './' + str(self.setup.executable)

def parallel_run_command(self) -> str or None:
return './' + str(self.setup.executable)

def generate_input_file(self, kwarg_dict: dict, directory: str, is_parallel: bool) -> None:
# `directory` is not used for the flash file write because it does not go through sirepo.lib
model = self._edit_input_file_schema(kwarg_dict)

# This function is always being called in a worker run directory so path is just file name
model.write_files(self.setup.input_file.name)

def _edit_input_file_schema(self, kwarg_dict: dict) -> _Model:
# This editor has no protection on value typing because we have no Sirepo schema
model = copy.deepcopy(self.input_file_model)
for n, v in kwarg_dict.items():
# Parser sets all keys to be lower case
n_lower = n.lower()
assert n_lower in model.keys(), f"Parameter/Setting: {n} is not defined and cannot be edited."
model[n_lower] = v

model = _Model(model)

return model

@cached_property
def input_file_model(self) -> dict or None:
d = parse_file(self.setup.input_file)

return d
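`_write_file` above emits one `key = value` line per entry of the `flash.par` dictionary. A sketch of the same layout written and re-read (the `read_par` parse step here is a naive stand-in for rsopt's `flash_parser`, shown only to illustrate the file format):

```python
import io

def write_par(par_dict: dict, fh) -> None:
    # Mirrors the "key = value" line format used by _write_file
    for key, val in par_dict.items():
        fh.write("{} = {} \n".format(key.strip(), val))

def read_par(fh) -> dict:
    # Naive reader: lower-case keys, keep values as strings
    out = {}
    for line in fh:
        if '=' in line:
            key, _, val = line.partition('=')
            out[key.strip().lower()] = val.strip()
    return out

buf = io.StringIO()
write_par({'xmin': 0.0, 'basenm': '"lasslab_"'}, buf)
buf.seek(0)
print(read_par(buf))  # {'xmin': '0.0', 'basenm': '"lasslab_"'}
```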
47 changes: 47 additions & 0 deletions rsopt/codes/genesis.py
@@ -0,0 +1,47 @@
import typing
from functools import cached_property
from rsopt.codes.models import base_model, genesis_model
from rsopt.codes.parsers import genesis_parser
from rsopt.codes.writers import write
from rsopt.configuration.schemas import code
from rsopt.configuration.schemas import setup as setup_schema

class Setup(setup_schema.Setup):
pass

class Genesis(code.Code):
code: typing.Literal['genesis'] = 'genesis'
setup: Setup

@classmethod
def serial_run_command(cls) -> str or None:
return 'genesis'

@classmethod
def parallel_run_command(cls) -> str or None:
return 'genesis_mpi'

def generate_input_file(self, kwarg_dict: dict, directory: str, is_parallel: bool) -> None:
# There are only commands so this is simpler than some other tracking code cases
genesis_model_instance = self.input_file_model.model_copy(deep=True)

for name, value in kwarg_dict.items():
item_model = self.get_parameter_or_setting(name)
genesis_model_instance.edit_command(command_name=genesis_model.GENESIS_COMMAND_NAME,
parameter_name=item_model.item_name,
parameter_value=value,
command_index=item_model.item_index
)
# TODO: Right now we don't handle linking of resources like the geometry file. User will need to do that in rsopt config
write.write_to_file(
write.write_model(genesis_model_instance, write.Genesis),
filename=self.setup.input_file.name,
path=directory
)

@cached_property
def input_file_model(self) -> base_model.CommandModel:
input_file_model = genesis_parser.parse_simulation_input_file(self.setup.input_file, self.code, None, False)
model_schema = base_model.generate_model(genesis_model.Genesis, self.code)

return model_schema.model_validate(input_file_model)
20 changes: 20 additions & 0 deletions rsopt/codes/madx.py
@@ -0,0 +1,20 @@
import typing
from rsopt.codes import elegant
from rsopt.configuration.schemas import setup as setup_schema
from rsopt.libe_tools.executors import EXECUTION_TYPES

class Setup(setup_schema.Setup):
execution_type: typing.Literal[EXECUTION_TYPES.SERIAL]

class Madx(elegant.Elegant):
code: typing.Literal['madx'] = 'madx'
setup: Setup

@classmethod
def serial_run_command(cls) -> str or None:
return 'madx'

@classmethod
def parallel_run_command(cls) -> str or None:
# Execution type for madx is limited to serial by model
return None
18 changes: 18 additions & 0 deletions rsopt/codes/models/__init__.py
@@ -0,0 +1,18 @@
class FormatterRegistry:
_formatters = {}

@classmethod
def register(cls, key):
"""Decorate formatting functions to register."""
def decorator(func):
cls._formatters[key] = func
return func
return decorator

@classmethod
def get_formatter(cls, key):
return cls._formatters.get(key, cls.default_formatter)

@staticmethod
def default_formatter(value):
return value
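How the registry above is meant to be used: decorate a formatting function to register it under a key, and fall back to the identity default for unregistered keys. The class is repeated here so the snippet runs standalone; the `'float_sci'` key and `format_float` function are made-up examples:

```python
class FormatterRegistry:
    _formatters = {}

    @classmethod
    def register(cls, key):
        """Decorate formatting functions to register."""
        def decorator(func):
            cls._formatters[key] = func
            return func
        return decorator

    @classmethod
    def get_formatter(cls, key):
        return cls._formatters.get(key, cls.default_formatter)

    @staticmethod
    def default_formatter(value):
        return value

@FormatterRegistry.register('float_sci')
def format_float(value):
    return f"{value:.3e}"

print(FormatterRegistry.get_formatter('float_sci')(0.000123))  # 1.230e-04
print(FormatterRegistry.get_formatter('unknown')(42))          # 42
```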
83 changes: 83 additions & 0 deletions rsopt/codes/models/base_model.py
@@ -0,0 +1,83 @@
import pydantic
import typing

def _is_union(tp) -> bool:
# Handling of Unions seems very inconsistent with other Python objects in 3.9
# Union[int] -> int and top suggestions:
# - https://stackoverflow.com/questions/45957615/how-to-check-a-variable-against-union-type-during-runtime
# - https://stackoverflow.com/questions/75219678/check-if-a-type-is-union-type-in-python
# - https://github.com/python/typing/issues/528 (!)
# Assume you will get only a type back or are just broken now

if hasattr(typing, 'get_origin'): # Python 3.8+
return typing.get_origin(tp) is typing.Union
return isinstance(tp, typing._GenericAlias) and tp.__origin__ is typing.Union # Python <3.8

DISCRIMINATOR_NAME = 'command_name'
T = typing.TypeVar('T')
class CommandModel(pydantic.BaseModel, typing.Generic[T]):
commands: typing.List[T] = pydantic.Field(discriminator=DISCRIMINATOR_NAME)


# Use a model_validator to distribute commands from the command list into per-name fields.
# This means each command-name field and 'commands' hold lists pointing to shared objects,
# allowing edit_command to search and edit in only one place.
@pydantic.model_validator(mode='after')
def generate_command_name_fields(self):
for command in self.commands:
# CommandModel will have a field, that is a list, corresponding to each command name type
# add each command model instance to the field corresponding to its name
getattr(self, getattr(command, DISCRIMINATOR_NAME)).append(command)

return self


@pydantic.validate_call
def edit_command(self, command_name: typing.Annotated[str, pydantic.StringConstraints(to_lower=True)],
parameter_name: str,
parameter_value: typing.Any,
command_index: int or None = None) -> None:
"""Edit a copy of a command model and return CommandContainer with the copy.
Args:
command_name: (str) Name of the command type to edit.
parameter_name: (str) Name of the parameter to edit.
parameter_value: (typing.Any) New value of the parameter.
command_index: (int) Index of the command to edit, required if the command is used multiple times.
Returns:
"""
command_list = getattr(self, command_name)
assert (len(command_list) == 1) or (command_index is not None), \
(f'Command {command_name} has {len(command_list)} instances. '
f'`command_index` must be set.')

command_index = command_index if command_index is not None else 0

setattr(command_list[command_index], parameter_name, parameter_value)


def generate_model(model_T, code_name):
model_type = typing.Annotated[model_T, pydantic.Field(discriminator=DISCRIMINATOR_NAME)]
base_model = CommandModel[model_type]

model_name = f"{code_name}Model"

dynamic_fields = {}
if _is_union(model_T):
# Received Union[Models]
for m in typing.get_args(model_T):
_field = pydantic.Field(default_factory=list, discriminator=DISCRIMINATOR_NAME)
dynamic_fields[m.model_fields[DISCRIMINATOR_NAME].default] = (list[m], _field)
else:
# Just a single Model
_field = pydantic.Field(default_factory=list, discriminator=DISCRIMINATOR_NAME)
dynamic_fields[model_T.model_fields[DISCRIMINATOR_NAME].default] = (list[model_T], _field)

dynamic_model = pydantic.create_model(model_name, __base__=base_model, **dynamic_fields)

return dynamic_model
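The branching in `generate_model` hinges on the `typing.get_origin` / `typing.get_args` behavior that the comment in `_is_union` complains about: `Union[A]` collapses to plain `A`, so a single-model argument never looks like a union. A standalone check of exactly those behaviors (Python 3.8+, stdlib only; classes `A`/`B` are placeholders):

```python
import typing

def is_union(tp) -> bool:
    # Same check as _is_union above, for Python 3.8+
    return typing.get_origin(tp) is typing.Union

class A: pass
class B: pass

print(is_union(typing.Union[A, B]))         # True
print(is_union(typing.Union[A]))            # False: Union[A] collapses to plain A
print(typing.get_args(typing.Union[A, B]))  # (<class 'A'>, <class 'B'>)
```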



316 changes: 316 additions & 0 deletions rsopt/codes/models/genesis_model.py

Large diffs are not rendered by default.

319 changes: 319 additions & 0 deletions rsopt/codes/models/spiffe_model.py

Large diffs are not rendered by default.

18 changes: 18 additions & 0 deletions rsopt/codes/opal.py
@@ -0,0 +1,18 @@
import typing
from rsopt.codes import elegant
from rsopt.configuration.schemas import setup as setup_schema

class Setup(setup_schema.Setup):
pass

class Opal(elegant.Elegant):
code: typing.Literal['opal'] = 'opal'
setup: Setup

@classmethod
def serial_run_command(cls) -> typing.Optional[str]:
    return 'opal'

@classmethod
def parallel_run_command(cls) -> typing.Optional[str]:
    return 'opal'
46 changes: 46 additions & 0 deletions rsopt/codes/parsers/__init__.py
@@ -0,0 +1,46 @@
import enum
import pickle
import typing
from pykern import pkio
from pykern import pkresource
from rsopt import util
class PARSERS(str, enum.Enum):
GENESIS = 'genesis'

def parse_simulation_input_file(input_file: str, code_name, ignored_files: typing.Optional[typing.List[str]] = None,
                                shifter: bool = False) -> typing.Optional[typing.Type['sirepo.lib.SimData']]:

if shifter:
# Must pass a list to ignored_files here since it is sent to subprocess
d = _shifter_parse_model(code_name, input_file, ignored_files or [])
else:
import sirepo.lib
d = sirepo.lib.Importer(code_name, ignored_files).parse_file(input_file)

return d


def _shifter_parse_model(name: str, input_file: str, ignored_files: list) -> typing.Optional[typing.Type['sirepo.lib.SimData']]:
# Sidesteps the difficulty of Sirepo install on NERSC by running a script that parses to the Sirepo model
import shlex
from subprocess import Popen, PIPE

# TODO: These will need to be set up (_DEFAULT_SHIFTER_IMAGE is also used elsewhere)
_SHIFTER_BASH_FILE = pkio.py_path(pkresource.filename('shifter_exec.sh'))
_SHIFTER_SIREPO_SCRIPT = pkio.py_path(pkresource.filename('shifter_sirepo.py'))
_DEFAULT_SHIFTER_IMAGE = 'radiasoft/sirepo:prod'


node_to_use = util.return_unused_node()
if node_to_use:
run_string = f"srun -w {node_to_use} --ntasks 1 --nodes 1 shifter --image={_DEFAULT_SHIFTER_IMAGE} " \
f"/bin/bash {_SHIFTER_BASH_FILE} python {_SHIFTER_SIREPO_SCRIPT}"
run_string = ' '.join([run_string, name, input_file, *ignored_files])
cmd = Popen(shlex.split(run_string), stderr=PIPE, stdout=PIPE)
out, err = cmd.communicate()
if err:
print(err.decode())
raise Exception('Model load from Sirepo in Shifter failed.')
d = pickle.loads(out)
else:
d = None

return util.broadcast(d)
18 changes: 18 additions & 0 deletions rsopt/codes/parsers/flash_parser.py
@@ -0,0 +1,18 @@
import re

# TODO: flash does not use lark yet and does not share a common interface with other codes in parsers.
# Should be updated to use lark.

def parse_file(par_path: str) -> dict:
    """Read in a FLASH .par file and return a dictionary of parameters and values."""

    res = {}
    with open(par_path) as par_text:
        for line in par_text:
            line = re.sub(r"#.*$", "", line)
            m = re.search(r"^(\w.*?)\s*=\s*(.*?)\s*$", line)
            if m:
                f, v = m.group(1, 2)
                res[f.lower()] = v
    return res
49 changes: 49 additions & 0 deletions rsopt/codes/parsers/genesis_parser.py
@@ -0,0 +1,49 @@
import lark
import re
import typing

# TODO: Set path to resource
parser = lark.Lark.open('../../package_data/grammars/genesis.lark', g_regex_flags=re.I, rel_to=__file__)


# TODO: This follows parse function in init for now. Should unify the definitions once parsers are all converted.
def parse_simulation_input_file(input_file: str, code_name='genesis', ignored_files: typing.Optional[list[str]] = None,
                                shifter: bool = False) -> dict:
    """Parse a genesis simulation input file.
    Args:
        input_file: (str) the path to the input file
        code_name: (str) the name of the code. Included for compatibility.
        ignored_files: list[str] Not used. Included for compatibility.
        shifter: (bool) Not used. Included for compatibility.
    Returns:
        dict: command data with a 'commands' list, as produced by Transformer.file.
    """
with open(input_file, 'r') as f:
genesis_file = f.read()
genesis_input = parser.parse(genesis_file)

return Transformer().transform(genesis_input)


class Transformer(lark.Transformer):
def file(self, params):
# return list of single command to follow structure for base_model
cmd = {p[0]: p[1] for p in params}
cmd['command_name'] = 'newrun'
return {'commands': [cmd,]}
def NAME(self, v):
return str(v).lower()
def NUMBER(self, v):
return float(v)
def INT(self, v):
return int(v)
def STRING(self, v):
return str(v).strip("\"")
def parameter(self, param):
k = param[0]
if len(param[1:]) > 1:
v = [p for p in param[1:]]
else:
v = param[1]
return k, v
106 changes: 106 additions & 0 deletions rsopt/codes/parsers/spiffe_parser.py
@@ -0,0 +1,106 @@
import lark
import re
import typing

# TODO: Set path to resource
parser = lark.Lark.open('../../package_data/grammars/spiffe.lark', g_regex_flags=re.I, rel_to=__file__)

# TODO: This follows parse function in init for now. Should unify the definitions once parsers are all converted.
def parse_simulation_input_file(input_file: str, code_name='spiffe', ignored_files: typing.Optional[list[str]] = None,
                                shifter: bool = False) -> dict:
    """Parse a spiffe simulation input file.
    Args:
        input_file: (str) the path to the input file
        code_name: (str) the name of the code. Included for compatibility.
        ignored_files: list[str] Not used. Included for compatibility.
        shifter: (bool) Not used. Included for compatibility.
    Returns:
        dict: command data with a 'commands' list, as produced by Transformer.start.
    """
with open(input_file, 'r') as f:
spiffe_file = f.read()
spiffe_input = parser.parse(spiffe_file)

return Transformer().transform(spiffe_input)

class Transformer(lark.Transformer):
def __init__(self):
    super().__init__()
    self._macros = {}
    self._rpn = {}
    self._cmd = {}

    self._defer_rpn = True
    self._defer_cmd = True

def NEWLINE(self, token):
return lark.Discard

def ESCAPED_RPN_START(self, token):
return lark.Discard

def ESCAPED_RPN_END(self, token):
return lark.Discard

def ESCAPED_SHELL_START(self, token):
return lark.Discard

def ESCAPED_SHELL_END(self, token):
return lark.Discard

def SHELL_CONTENT(self, token):
self._cmd[token] = token
return '"{' + self._cmd[token] + '}"'

def RPN_CONTENT(self, token):
self._rpn[token] = token
return '"(' + self._rpn[token] + ')"'

def command_name(self, token):
return str(token)

def command_end(self, token):
return lark.Discard

def start(self, cmds):
command_list = [cmd for cmd in cmds if cmd]

command_data = {}

command_data['commands'] = command_list
return command_data

def valid_command(self, cmd):
cmd_name = cmd[0]
cmd_parameters = cmd[1] if len(cmd) > 1 else []
# TODO: OK to import rsopt.codes.models.base_model.DISCRIMINATOR_NAME?
return {'command_name': cmd_name, **{k: v for (k, v) in cmd_parameters}}

def ignore(self, cmd):
return

def parameter_list(self, plist):
return list(plist)

def parameter(self, param):
k, v = param
return k, v

def NAME(self, w):
return str(w)

def numbers(self, n):
(n,) = n
return float(n)

def string(self, s):
(s,) = s
return str(s)

def shell_cmd(self, cmd):
(cmd,) = cmd
return cmd

def rpn_expression(self, rpn):
(rpn,) = rpn
return rpn
94 changes: 94 additions & 0 deletions rsopt/codes/python.py
@@ -0,0 +1,94 @@
import jinja2
import pathlib
import pydantic
import sys
import typing
from rsopt.configuration.schemas import code
from rsopt.configuration.schemas import setup as setup_schema
from rsopt import util

_PARALLEL_PYTHON_TEMPLATE = 'run_parallel_python.py.jinja'
_PARALLEL_PYTHON_RUN_FILE = 'run_parallel_python.py'

# TODO: This will need to be set once installation is updated
_TEMPLATE_PATH = ''

class Setup(setup_schema.Setup):
input_file: pydantic.FilePath
serial_python_mode: typing.Literal["process", "thread", "worker"] = 'worker'
function: typing.Union[str, typing.Callable]
argument_passing: code.ArgumentModes = pydantic.Field(code.ArgumentModes.KWARGS)

class Python(code.Code):
code: typing.Literal["python"]
setup: Setup

_function: typing.Callable

@pydantic.model_validator(mode='after')
def instantiate_function(self):
# libEnsemble workers change active directory - sys.path will not record locally available modules
sys.path.append('python')

module = util.run_path_as_module(self.setup.input_file)
function = getattr(module, self.setup.function)
self._function = function

return self

@property
def get_function(self) -> typing.Callable:
return self._function

@classmethod
def serial_run_command(cls) -> typing.Optional[str]:
    # serial is not executed by an Executor subprocess so no run command is needed
    return None

@classmethod
def parallel_run_command(cls) -> typing.Optional[str]:
    return 'python'

@property
def _get_filename(self) -> str:
# Serial python is run on worker so this is never used unless is_parallel==True
filename = _PARALLEL_PYTHON_RUN_FILE

return filename

@property
def get_sym_link_targets(self) -> set:
if self.use_executor:
return {self.setup.input_file}
return set()

@property
def use_executor(self) -> bool:
if self.setup.force_executor or self.use_mpi:
return True
return False

def generate_input_file(self, kwarg_dict, directory, is_parallel):
if not self.use_executor:
return None

template_loader = jinja2.FileSystemLoader(searchpath=_TEMPLATE_PATH)
template_env = jinja2.Environment(loader=template_loader)
template = template_env.get_template(_PARALLEL_PYTHON_TEMPLATE)

dict_item_str = {}
for k, v in kwarg_dict.items():
    if isinstance(v, str):
        dict_item_str[k] = v
for k in dict_item_str.keys():
    kwarg_dict.pop(k)

output_template = template.render(dict_item=kwarg_dict, dict_item_str=dict_item_str,
full_input_file_path=self.setup.input_file,
function=self.setup.function)

file_path = pathlib.Path(directory).joinpath(_PARALLEL_PYTHON_RUN_FILE)

with open(file_path, 'w') as ff:
ff.write(output_template)
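The kwarg split in `generate_input_file` exists so the template can quote string-valued arguments differently from numeric ones. Its behavior reduces to the following sketch (the `kwarg_dict` contents are illustrative):

```python
# Partition keyword arguments into string-valued and everything else,
# as generate_input_file does before rendering the jinja template.
kwarg_dict = {"npart": 1000, "mode": "fast", "dt": 1e-9}

dict_item_str = {k: v for k, v in kwarg_dict.items() if isinstance(v, str)}
for k in dict_item_str:
    kwarg_dict.pop(k)

assert dict_item_str == {"mode": "fast"}
assert kwarg_dict == {"npart": 1000, "dt": 1e-9}
```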

560 changes: 0 additions & 560 deletions rsopt/codes/radia/sim_functions.py

This file was deleted.

48 changes: 48 additions & 0 deletions rsopt/codes/spiffe.py
@@ -0,0 +1,48 @@
import typing
from functools import cached_property
from rsopt.codes.models import base_model, spiffe_model
from rsopt.codes.parsers import spiffe_parser
from rsopt.codes.writers import write
from rsopt.configuration.schemas import code
from rsopt.configuration.schemas import setup as setup_schema

class Setup(setup_schema.Setup):
pass

class Spiffe(code.Code):
code: typing.Literal['spiffe'] = 'spiffe'
setup: Setup

@classmethod
def serial_run_command(cls) -> typing.Optional[str]:
    return 'spiffe'

@classmethod
def parallel_run_command(cls) -> typing.Optional[str]:
    return 'spiffe'

def generate_input_file(self, kwarg_dict: dict, directory: str, is_parallel: bool) -> None:
# There are only commands so this is simpler than some other tracking code cases
spiffe_model_instance = self.input_file_model.model_copy(deep=True)

for name, value in kwarg_dict.items():
item_model = self.get_parameter_or_setting(name)
spiffe_model_instance.edit_command(command_name=item_model.item_name,
parameter_name=item_model.item_attribute,
parameter_value=value,
command_index=item_model.item_index
)
# TODO: Right now we don't handle linking of resources like the geometry file. User will need to do that in rsopt config
write.write_to_file(
write.write_model(spiffe_model_instance, write.Spiffe),
filename=self.setup.input_file.name,
path=directory
)

@cached_property
def input_file_model(self) -> base_model.CommandModel:
input_file_model = spiffe_parser.parse_simulation_input_file(self.setup.input_file, self.code, None, False)

model_schema = base_model.generate_model(spiffe_model.SPIFFE, self.code)

return model_schema.model_validate(input_file_model)
41 changes: 41 additions & 0 deletions rsopt/codes/user.py
@@ -0,0 +1,41 @@
import typing
import pathlib
import pydantic
from rsopt.configuration.schemas import code
from rsopt.configuration.schemas import setup as setup_schema
from rsopt import util
class Setup(setup_schema.Setup):
run_command: str
file_mapping: dict = pydantic.Field(default_factory=dict)
input_file: str = ''
file_definitions: typing.Optional[pydantic.FilePath] = None

@pydantic.field_validator("input_file", mode="before")
@classmethod
def coerce_none_to_empty(cls, v):
# Setup for user mode should accept an explicit value of None
return "" if v is None else v

class User(code.Code):
code: typing.Literal['user'] = 'user'
setup: Setup

@property
def serial_run_command(self) -> typing.Optional[str]:
    return self.setup.run_command

@property
def parallel_run_command(self) -> typing.Optional[str]:
    return self.setup.run_command

def generate_input_file(self, kwarg_dict: dict, directory: str, is_parallel: bool) -> None:
# Get strings for each file and fill in arguments for this job
base_run_path = pathlib.Path.cwd().expanduser()

for key, val in self.setup.file_mapping.items():
module_path = pathlib.Path(base_run_path).joinpath(self.setup.file_definitions)
module = util.run_path_as_module(module_path)

local_file_instance = module().__getattribute__(key).format(**kwarg_dict)
with open(pathlib.Path(directory).joinpath(val), 'w') as ff:
ff.write(local_file_instance)
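The fill step in `User.generate_input_file` looks up a template attribute by the `file_mapping` key and formats it with the job's `kwarg_dict`. A minimal sketch, with an in-memory stand-in (`FileDefinitions` and its `deck` attribute are illustrative names, not part of rsopt):

```python
# Stand-in for the module loaded from setup.file_definitions: each
# attribute holds a file template with str.format-style placeholders.
class FileDefinitions:
    deck = "nx = {nx}\nny = {ny}\n"

kwarg_dict = {"nx": 64, "ny": 128}

# Equivalent of looking up the file_mapping key and formatting it.
local_file_instance = getattr(FileDefinitions, "deck").format(**kwarg_dict)
assert local_file_instance == "nx = 64\nny = 128\n"
```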
123 changes: 0 additions & 123 deletions rsopt/codes/warp/libe_sim.py

This file was deleted.

56 changes: 0 additions & 56 deletions rsopt/codes/warp/tec_utilities.py

This file was deleted.

File renamed without changes.
41 changes: 41 additions & 0 deletions rsopt/codes/writers/write.py
@@ -0,0 +1,41 @@
import enum
import pathlib
import pydantic
from rsopt.codes.models import base_model

# TODO: This will go into code specific places I guess
class Elegant(enum.Enum):
CMD = "&{command_name}\n"
PARAM = "\t{field} = {value},\n"
END = "&end\n\n"
class Spiffe(enum.Enum):
CMD = "&{command_name}\n"
PARAM = "\t{field} = {value},\n"
END = "&end\n\n"
class Genesis(enum.Enum):
CMD = '$newrun\n'
PARAM = "{field} = {value}\n"
END = '$end\n'

def write_command(command: pydantic.BaseModel, structure: enum.Enum) -> str:
model_dump_dict = command.model_dump(exclude_defaults=True, by_alias=True)
cmd_string = structure.CMD.value.format(**{'command_name': command.command_name, **model_dump_dict})
for key, value in model_dump_dict.items():
cmd_string += structure.PARAM.value.format(field=key, value=value)

return cmd_string + structure.END.value

def write_model(model: base_model.CommandModel, structure: enum.Enum) -> str:
string_model = ""
for m in model.commands:
string_model += write_command(m, structure)

return string_model

def write_to_file(model_string: str, filename: str, path: str = '.') -> None:
filepath = pathlib.Path(path) / filename
with open(filepath, "w") as f:
f.write(model_string)

# TODO: Should have a write to file function defined
# TODO: Will need to handle supporting file linking or writing
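The string layout `write_command` produces can be seen with a stand-in that uses the same Elegant-style `CMD`/`PARAM`/`END` templates but takes a plain dict instead of a pydantic model dump (the `run_setup` command and its parameters are illustrative):

```python
# Same templates as the Elegant enum above.
CMD = "&{command_name}\n"
PARAM = "\t{field} = {value},\n"
END = "&end\n\n"

def render(command_name, params):
    # Mirror write_command: header, one indented line per parameter, footer.
    out = CMD.format(command_name=command_name)
    for k, v in params.items():
        out += PARAM.format(field=k, value=v)
    return out + END

text = render("run_setup", {"lattice": "lattice.lte", "p_central_mev": 100})
assert text == "&run_setup\n\tlattice = lattice.lte,\n\tp_central_mev = 100,\n&end\n\n"
```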
31 changes: 16 additions & 15 deletions rsopt/configuration/__init__.py
@@ -1,15 +1,16 @@
from rsopt.configuration.parameters import PARAMETER_READERS, Parameters
from rsopt.configuration.settings import SETTING_READERS, Settings
from rsopt.configuration.setup import SETUP_READERS
from rsopt.configuration.setup.setup import Setup
from rsopt.configuration.setup.setup import SetupTemplated
from rsopt.configuration.setup.elegant import Elegant
from rsopt.configuration.setup.genesis import Genesis
from rsopt.configuration.setup.opal import Opal
from rsopt.configuration.setup.madx import Madx
from rsopt.configuration.setup.python import Python
from rsopt.configuration.setup.user import User
from rsopt.configuration.setup.flash import Flash
from rsopt.configuration.options import Options
from rsopt.configuration.configuration import Configuration
from rsopt.configuration.jobs import Job
# TODO: Remove all if not needed after pydantic integration
# from rsopt.configuration.parameters import PARAMETER_READERS, Parameters
# from rsopt.configuration.settings import SETTING_READERS, Settings
# from rsopt.configuration.setup import SETUP_READERS
# from rsopt.configuration.setup.setup import Setup
# from rsopt.configuration.setup.setup import SetupTemplated
# from rsopt.configuration.setup.elegant import Elegant
# from rsopt.configuration.setup.genesis import Genesis
# from rsopt.configuration.setup.opal import Opal
# from rsopt.configuration.setup.madx import Madx
# from rsopt.configuration.setup.python import Python
# from rsopt.configuration.setup.user import User
# from rsopt.configuration.setup.flash import Flash
# # from rsopt.configuration.options import Options
# from rsopt.configuration.schemas.configuration import Configuration
# # from rsopt.configuration.schemas import Job
3 changes: 2 additions & 1 deletion rsopt/configuration/jobs.py
@@ -1,10 +1,11 @@
# TODO: Evaluate if this module can be removed
from pykern import pkio
from pykern import pkresource
from rsopt.configuration.parameters import PARAMETER_READERS, Parameters
from rsopt.configuration.settings import SETTING_READERS, Settings
from rsopt.configuration.setup import SETUP_READERS
from rsopt.configuration.setup.setup import Setup
from rsopt.codes import serial_python
from rsopt.libe_tools import serial_python
import jinja2
import pathlib
import typing
146 changes: 33 additions & 113 deletions rsopt/configuration/options/__init__.py
@@ -1,120 +1,40 @@
import pathlib
import typing
from pykern import pkyaml
from rsopt import OPTIONS_SCHEMA, OPTIMIZER_SCHEMA


_TYPE_MAPPING = {
# Map typing from schema to Python types
'None': type(None),
'bool': bool,
'str': str,
'int': int,
'float': float,
'list': list,
'dict': dict
}


class Options:
NAME = 'options'
__REQUIRED_KEYS = ('software',)
_REGISTERED_OPTIONS = pkyaml.load_file(OPTIONS_SCHEMA)
_OPT_SCHEMA = pkyaml.load_file(OPTIMIZER_SCHEMA)
REQUIRED_OPTIONS = ()

def __init__(self):
self._objective_function = []
self.exit_criteria = {}
self.software_options = {}
self.executor_options = {}
self.software = ''
self.method = ''
self.seed = ''
self.sym_links = []
self.nworkers = 2
self.use_worker_dirs = True
self.sim_dirs_make = True
self.copy_final_logs = True
self.run_dir = './ensemble'
self.record_interval = 0
self.output_file = ''
# use_zero_resource is not set in options_schema and thus cannot be set by the user
self.use_zero_resource = True
from enum import Enum
from rsopt.configuration.options import dfols, dlib, pysot, nsga2, scipy, nlopt, mobo, aposmm
from rsopt.configuration.options import mesh, lh


class SUPPORTED_OPTIONS(Enum):
mesh_scan = ('sample', mesh.Mesh)
dfols = ('optimize', dfols.Dfols)
dlib = ('optimize', dlib.Dlib)
pysot = ('optimize', pysot.Pysot)
lh_scan = ('sample', lh.LH)
nsga2 = ('optimize', nsga2.Nsga2)
scipy = ('optimize', scipy.Scipy)
nlopt = ('optimize', nlopt.Nlopt)
mobo = ('optimize', mobo.Mobo)
aposmm = ('optimize', aposmm.Aposmm)

def __init__(self, typing, model):
self.typing = typing
self.model = model

@classmethod
def get_option(cls, options_instance):
# Imported locally to prevent circular dependency
from rsopt.configuration.options.options import option_classes

# Required setup checks for options
cls.__check_options(options_instance)
software = options_instance.pop('software')

# Implementation specific checks
option_classes[software]._check_options(options_instance)

return option_classes[software]
def get_sample_names(cls) -> tuple[str, ...]:
"""Returns a list of option names that can be used by sample command"""
return tuple([name for name, member in cls.__members__.items() if member.value[0] == 'sample'])

@classmethod
def __check_options(cls, options):
# First check for options required by base class
# Inherited classes check for their requirements with _check_options
for key in cls.__REQUIRED_KEYS:
assert options.get(key), f"{key} must be defined in options"
def get_sample_models(cls) -> tuple:
"""Returns a list of models for options that can be used by sample command"""
return tuple([name.model for name in cls if getattr(cls, name.name).value[0] == 'sample'])

@classmethod
def _check_options(cls, options):
# Ensure all required keys/values for an options class are in the parsed input
name = cls.NAME
for key in cls.REQUIRED_OPTIONS:
assert options.get(key), f"{key} must be defined in options for {name}"

@property
def objective_function(self) -> typing.List[str]:
_obj_f = self._objective_function.copy()
if len(self._objective_function) == 2:
_obj_f[0] = str(pathlib.Path(self._objective_function[0]).resolve())

return _obj_f

@objective_function.setter
def objective_function(self, objective_function: typing.List[str]):
assert len(objective_function) == 2 or len(objective_function) == 0, "If options.objective_function is set " \
"it should contain: " \
"[path to module, function name]"
self._objective_function = objective_function

def _validate_input(self, name, value):
# Ensure each option key is recognized. Check typing for each option value.
co = self._REGISTERED_OPTIONS[self.NAME]
if name not in co.keys() and name not in self.REQUIRED_OPTIONS:
raise KeyError(f'options {name} was not recognized')
else:
allowed_types = co[name]['typing']
value_pass = isinstance(value, tuple(_TYPE_MAPPING[t] for t in allowed_types))
if not value_pass:
received_type = type(value)
raise TypeError(f'{name} must be one of types {allowed_types}, but received {received_type}')

return True

def parse(self, name, value):
# Options has a fairly strict set of expected values
# so we can impose more checks here compared to other categories
if self._validate_input(name, value):
self.__setattr__(name, value)

def get_sim_specs(self):
# This is the most common sim_spec setting. It is updated by libEnsembleOptimizer if a different value needed.
# TODO: It's not clear why this is defined here.
# It is something that will vary by Option class but is otherwise static.
# Maybe it should be put into the options schema?
sim_specs = {
'in': ['x'],
'out': [('f', float), ]
}

return sim_specs

def get_optimize_names(cls) -> tuple[str, ...]:
"""Returns a list of option names that can be used by optimize command"""
return tuple([name for name, member in cls.__members__.items() if member.value[0] == 'optimize'])

@classmethod
def get_optimize_models(cls) -> tuple:
"""Returns a list of models for options that can be used by optimize command"""
return tuple([name.model for name in cls if getattr(cls, name.name).value[0] == 'optimize'])
127 changes: 85 additions & 42 deletions rsopt/configuration/options/aposmm.py
@@ -1,42 +1,85 @@
from rsopt.configuration.options import Options
from rsopt.configuration import options


class Aposmm(Options):
NAME = 'aposmm'
REQUIRED_OPTIONS = ('method', 'exit_criteria', 'initial_sample_size')
# Only can allow what aposmm_localopt_support handles right now
ALLOWED_METHODS = ('LN_BOBYQA', 'LN_SBPLX', 'LN_COBYLA', 'LN_NEWUOA',
'LN_NELDERMEAD', 'LD_MMA')
SOFTWARE_OPTIONS = {
'high_priority_to_best_localopt_runs': True,
'max_active_runs': 1,
'initial_sample_size': 0
}

def __init__(self):
super().__init__()

self.load_start_sample = ''

for key, val in self.SOFTWARE_OPTIONS.items():
self.__setattr__(key, val)

def get_sim_specs(self):
# Imported locally to prevent circular dependency
from rsopt.configuration.options.options import option_classes

def split_method(method_name):
software, method = method_name.split('.')
return software, method
software, method = split_method(self.method)
sim_specs = self._OPT_SCHEMA[software]['methods'][method]['sim_specs']
opt_method = option_classes[software]()
for key in opt_method.REQUIRED_OPTIONS:
if key == 'exit_criteria' or key == 'method':
continue
assert self.software_options.get(key), f'Use of {software}.{method} with APOSMM requires that {key} be ' \
f'set in software_options '
opt_method.parse(key, self.software_options.pop(key))
sim_specs['out'] = [tuple(t) if len(t) == 2 else (*t[:2], opt_method.__getattribute__(t[2])) for t in sim_specs['out']]
return sim_specs
from rsopt.configuration.schemas import options
import pydantic
import types
import typing
import importlib
import inspect
import pkgutil

# TODO: `package` here needs to have a __path__ defined so it can't just be a module. But there appears to only
# be type hinting for a module. Unless this is a boneless wings type situation...
def find_subclasses_of_method(package: types.ModuleType):
# Search modules in `package` and return all classes that subclass `_target_method` but are not `_target_method`
_target_method = options.Method
matching_subclasses = []

for _, module_name, _ in pkgutil.iter_modules(package.__path__, package.__name__ + "."):
module = importlib.import_module(module_name)

for name, obj in inspect.getmembers(module, inspect.isclass):
if issubclass(obj, _target_method) and obj is not _target_method:
matching_subclasses.append(obj)

return matching_subclasses

def get_local_opt_methods():
from rsopt.configuration import options as pkg_options
opt_methods = find_subclasses_of_method(pkg_options)
supported_local_opt_methods = typing.Union[tuple([m for m in opt_methods if m.aposmm_support])]

return supported_local_opt_methods


class AposmmOptions(pydantic.BaseModel, extra='forbid'):
initial_sample_size: int
max_active_runs: typing.Optional[int] = None
local_opt_options: options.SoftwareOptions
load_start_sample: typing.Optional[pydantic.FilePath] = None
# dist_to_bound_multiple: float =
# lhs_divisions
# mu
# nu
# rk_const
# stop_after_k_minima
# stop_after_k_runs

@pydantic.model_validator(mode='after')
def set_default_max_active_runs(self):
    if self.max_active_runs is None:
        self.max_active_runs = self.parent.nworkers - 1
    return self

@pydantic.model_validator(mode='before')
@classmethod
def validate_local_opt_options(cls, v, validation_info):
# NOTE: receives (dict, ValidationInfo)
model = validation_info.data['method'].option_spec
v['local_opt_options'] = model.model_validate(v['local_opt_options'])

return v


class Aposmm(options.OptionsExit):
software: typing.Literal['aposmm'] = 'aposmm'
method: get_local_opt_methods() = pydantic.Field(discriminator='name')
software_options: AposmmOptions

@pydantic.model_validator(mode='after')
def set_software_options_parent(self):
self.software_options.parent = self
return self

@pydantic.model_validator(mode='after')
def initialize_dynamic_outputs(self):
for param, output_type in self.method.sim_specs.dynamic_outputs.items():
if hasattr(self, param):
size = getattr(self, param)
elif hasattr(self.software_options.local_opt_options, param):
    size = getattr(self.software_options.local_opt_options, param)
else:
raise AttributeError(f"{param} not a member of {self}")
self.method.sim_specs._initialized_dynamic_outputs.append(
output_type + (size,)
)

return self
45 changes: 22 additions & 23 deletions rsopt/configuration/options/dfols.py
@@ -1,29 +1,28 @@
import rsopt.configuration
from rsopt.configuration.schemas import options
import pydantic
import typing


class Dfols(rsopt.configuration.Options):
NAME = 'dfols'
REQUIRED_OPTIONS = ('exit_criteria', 'components')
class DfolsOptions(options.SoftwareOptions, extra='allow'):
components: int

def __init__(self):
super().__init__()
self.components = 1
self.method = 'dfols'

@classmethod
def _check_options(cls, options):
for key in cls.REQUIRED_OPTIONS:
assert options.get(key), f"{key} must be defined in options to use {cls.NAME}"

if 'software_options' in options.keys():
options['software_options'].setdefault('components', options.get('components'))
else:
options['software_options'] = {'components': options.get('components')}

def get_sim_specs(self):
sim_specs = {
'in': ['x'],
'out': [('f', float), ('fvec', float, self.components)]
class MethodDfols(options.Method):
name: typing.Literal['dfols'] = 'dfols'
parent_software = 'dfols'
aposmm_support = True
local_support = True
persis_in = ['f', 'fvec']
sim_specs = options.SimSpecs(
inputs=['x'],
static_outputs=[('f', float)],
dynamic_outputs={
'components': ('fvec', float)
}
)
option_spec = DfolsOptions

return sim_specs
class Dfols(options.OptionsExit):
software: typing.Literal['dfols']
method: typing.Union[MethodDfols] = pydantic.Field(default=MethodDfols(), discriminator='name')
software_options: DfolsOptions
32 changes: 23 additions & 9 deletions rsopt/configuration/options/dlib.py
@@ -1,12 +1,26 @@
from rsopt.configuration.options import Options
from rsopt.configuration.schemas import options
import pydantic
import typing


class Dlib(Options):
NAME = 'dlib'
REQUIRED_OPTIONS = ('exit_criteria',)
SOFTWARE_OPTIONS = {}
class DlibOptions(options.SoftwareOptions, extra='forbid'):
pass

def __init__(self):
super().__init__()
self.nworkers = 2
self.method = 'dlib'

class MethodDlib(options.Method):
name: typing.Literal['dlib'] = 'dlib'
aposmm_support = False
local_support = False
persis_in = ['f', 'sim_id']
sim_specs=options.SimSpecs(
inputs=['x'],
static_outputs=[('f', float)],
dynamic_outputs={}
)
option_spec = DlibOptions


class Dlib(options.OptionsExit):
software: typing.Literal['dlib'] = 'dlib'
method: typing.Union[MethodDlib] = pydantic.Field(default=MethodDlib(), discriminator='name')
software_options: DlibOptions = pydantic.Field(default=DlibOptions())
44 changes: 35 additions & 9 deletions rsopt/configuration/options/lh.py
@@ -1,12 +1,38 @@
from rsopt.configuration.options import Options
from rsopt.configuration.schemas import options
import pydantic
import typing


class LH(Options):
NAME = 'lh_scan'
REQUIRED_OPTIONS = ('batch_size',)
class SoftwareOptionsLH(options.SoftwareOptions):
batch_size: int
seed: typing.Union[int, None, typing.Literal['']] = ''

def __init__(self):
super().__init__()
self.use_zero_resource = False
self.nworkers = 1
self.batch_size = 0

class MethodLatinHypercube(options.Method):
name: typing.Literal['lh_scan'] = 'lh_scan'
aposmm_support = False
local_support = False
persis_in = None
sim_specs=options.SimSpecs(
inputs=['x'],
static_outputs=[('f', float),],
)
option_spec = SoftwareOptionsLH


# validate_assignment is used because the outputs field may be updated after initial instantiation
class LH(options.Options, validate_assignment=True):
software: typing.Literal['lh_scan']
method: typing.Union[MethodLatinHypercube] = pydantic.Field(default=MethodLatinHypercube(), discriminator='name')
software_options: SoftwareOptionsLH
nworkers: int = 1
outputs: list[tuple[str, type, int]] = pydantic.Field(default_factory=list)

use_zero_resources: bool = pydantic.Field(default=False, frozen=True)

@pydantic.model_validator(mode='after')
def update_outputs(self):
if len(self.outputs) > 0:
self.method.sim_specs.static_outputs = self.outputs

return self
44 changes: 35 additions & 9 deletions rsopt/configuration/options/mesh.py
@@ -1,12 +1,38 @@
from rsopt.configuration.options import Options
from rsopt.configuration.schemas import options
import pydantic
import typing


class Mesh(Options):
NAME = 'mesh_scan'
REQUIRED_OPTIONS = ()
class SoftwareOptionsMesh(options.SoftwareOptions):
    sampler_repeats: pydantic.PositiveInt = 1
    mesh_file: typing.Optional[pydantic.FilePath] = None

def __init__(self):
super().__init__()
self.use_zero_resource = False
self.nworkers = 1
self.mesh_file = ''

class MethodMeshScan(options.Method):
name: typing.Literal['mesh_scan'] = 'mesh_scan'
aposmm_support = False
local_support = False
persis_in = None
sim_specs=options.SimSpecs(
inputs=['x'],
static_outputs=[('f', float),],
)
option_spec = SoftwareOptionsMesh


class Mesh(options.Options, validate_assignment=True):
# validate_assignment is used because the outputs field may be updated after initial instantiation
software: typing.Literal['mesh_scan']
method: typing.Union[MethodMeshScan] = pydantic.Field(default=MethodMeshScan(name='mesh_scan'), discriminator='name')
software_options: SoftwareOptionsMesh = SoftwareOptionsMesh()
nworkers: int = 1
outputs: list[tuple[str, type, int]] = pydantic.Field(default_factory=list)

use_zero_resources: bool = pydantic.Field(default=False, frozen=True)

@pydantic.model_validator(mode='after')
def update_outputs(self):
if len(self.outputs) > 0:
self.method.sim_specs.static_outputs = self.outputs

return self
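The `update_outputs` validator above lets a user-supplied `outputs` list replace the method's default static sim_specs outputs after construction. A minimal, dependency-free sketch of that override logic (the output tuples below are illustrative, not from a real configuration):

```python
# Sketch of the outputs override performed by update_outputs: a non-empty
# user list replaces the method's default static outputs. Tuples are
# illustrative stand-ins for libEnsemble sim_specs entries.

default_static = [('f', float)]
user_outputs = [('f', float, 1), ('emittance', float, 2)]

static_outputs = user_outputs if len(user_outputs) > 0 else default_static
```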
46 changes: 30 additions & 16 deletions rsopt/configuration/options/mobo.py
@@ -1,22 +1,36 @@
import rsopt.configuration
from rsopt.configuration.schemas import options
import pydantic
import typing


class Mobo(rsopt.configuration.Options):
NAME = 'mobo'
REQUIRED_OPTIONS = ('exit_criteria', 'objectives', 'constraints')
class MoboOptions(pydantic.BaseModel, extra='forbid'):
# TODO: Ideally we would check that this dict has keys matching parameters
reference_point: dict[str, float]
constraints: dict = pydantic.Field(default_factory=dict)
num_of_objectives: int
min_calc_to_remodel: int = 1

def __init__(self):
super().__init__()
self.nworkers = 2
self.objectives = 1
self.constraints = 0
self.method = 'mobo'
self.reference = []
@property
def num_of_constraints(self):
return len(self.constraints)

def get_sim_specs(self):
sim_specs = {
'in': ['x'],
'out': [('f', float, self.objectives), ('c', float, self.constraints)]

class MethodMobo(options.Method):
name: typing.Literal['mobo'] = 'mobo'
aposmm_support = False
local_support = False
persis_in = ['f', 'c']
sim_specs=options.SimSpecs(
inputs=['x'],
static_outputs=[],
dynamic_outputs={
'num_of_objectives': ('f', float),
'num_of_constraints': ('c', float),
}
)
option_spec = MoboOptions

return sim_specs
class Mobo(options.OptionsExit):
software: typing.Literal['mobo'] = 'mobo'
method: typing.Union[MethodMobo] = pydantic.Field(MethodMobo(), discriminator='name')
software_options: MoboOptions
120 changes: 102 additions & 18 deletions rsopt/configuration/options/nlopt.py
@@ -1,18 +1,102 @@
from rsopt.configuration.options import Options


class Nlopt(Options):
NAME = 'nlopt'
# Ordering of required keys matters to validate method assignment is correct
REQUIRED_OPTIONS = ('method', 'exit_criteria')
# Only can allow what aposmm_localopt_support handles right now
ALLOWED_METHODS = ('LN_BOBYQA', 'LN_SBPLX', 'LN_COBYLA', 'LN_NEWUOA',
'LN_NELDERMEAD', 'LD_MMA')

@classmethod
def _check_options(cls, options):
for key in cls.REQUIRED_OPTIONS:
assert options.get(key), f"{key} must be defined in options to use {cls.NAME}"
proposed_method = options.get(cls.REQUIRED_OPTIONS[0])
assert proposed_method in cls.ALLOWED_METHODS, \
f"{proposed_method} not available for use in software {cls.NAME}"
# TODO: Check local_opt setup for the key name to pass nlopt options to.
# Set up separate libE class like with scipy
from rsopt.configuration.schemas import options
import pydantic
import typing

# The NLopt generator setup only passes a few specific arguments, so extra='allow' is not used for nlopt
class NloptOptionsBase(options.SoftwareOptions, extra='forbid'):
xtol_rel: typing.Optional[float] = pydantic.Field(None, description='End optimization if the relative tolerance level in function input (x) is reached.')
xtol_abs: typing.Optional[float] = pydantic.Field(None, description='End optimization if the absolute tolerance level in function input (x) is reached.')
ftol_rel: typing.Optional[float] = pydantic.Field(None, description='End optimization if the relative tolerance level in function output (f) is reached.')
ftol_abs: typing.Optional[float] = pydantic.Field(None, description='End optimization if the absolute tolerance level in function output (f) is reached.')

class NloptOptionsMma(NloptOptionsBase):
grad_dimensions: int = pydantic.Field(..., description='Number of dimensions (size) of the gradient data.')

# Naming for nlopt algorithms should follow the NLopt usage
class MethodBobyqa(options.Method):
name: typing.Literal['LN_BOBYQA'] = 'LN_BOBYQA'
parent_software = 'nlopt'
aposmm_support = True
local_support = True
persis_in = ['f', ]
sim_specs=options.SimSpecs(
inputs=['x'],
static_outputs=[('f', float)],
dynamic_outputs={}
)
option_spec: typing.ClassVar = NloptOptionsBase

class MethodCobyla(options.Method):
name: typing.Literal['LN_COBYLA'] = 'LN_COBYLA'
parent_software = 'nlopt'
aposmm_support = True
local_support = True
persis_in = ['f', ]
sim_specs=options.SimSpecs(
inputs=['x'],
static_outputs=[('f', float)],
dynamic_outputs={}
)
option_spec: typing.ClassVar = NloptOptionsBase

class MethodNewuoa(options.Method):
name: typing.Literal['LN_NEWUOA'] = 'LN_NEWUOA'
parent_software = 'nlopt'
aposmm_support = True
local_support = True
persis_in = ['f', ]
sim_specs=options.SimSpecs(
inputs=['x'],
static_outputs=[('f', float)],
dynamic_outputs={}
)
option_spec: typing.ClassVar = NloptOptionsBase

class MethodNelderMead(options.Method):
name: typing.Literal['LN_NELDERMEAD'] = 'LN_NELDERMEAD'
parent_software = 'nlopt'
aposmm_support = True
local_support = True
persis_in = ['f', ]
sim_specs=options.SimSpecs(
inputs=['x'],
static_outputs=[('f', float)],
dynamic_outputs={}
)
option_spec: typing.ClassVar = NloptOptionsBase

class MethodSubplex(options.Method):
name: typing.Literal['LN_SBPLX'] = 'LN_SBPLX'
parent_software = 'nlopt'
aposmm_support = True
local_support = True
persis_in = ['f', ]
sim_specs=options.SimSpecs(
inputs=['x'],
static_outputs=[('f', float)],
dynamic_outputs={}
)
option_spec: typing.ClassVar = NloptOptionsBase

class MethodMma(options.Method):
name: typing.Literal['LD_MMA'] = 'LD_MMA'
parent_software = 'nlopt'
aposmm_support = True
local_support = True
persis_in = ['f', 'grad']
sim_specs=options.SimSpecs(
inputs=['x'],
static_outputs=[('f', float)],
dynamic_outputs={'grad_dimensions': ('grad', float)}
)
option_spec: typing.ClassVar = NloptOptionsMma
_opt_return_code = [0]

_METHODS = typing.Union[MethodNelderMead, MethodCobyla, MethodBobyqa, MethodNewuoa, MethodSubplex, MethodMma]

class Nlopt(options.OptionsExit):
software: typing.Literal['nlopt']
method: _METHODS = pydantic.Field(..., discriminator='name')
software_options: typing.Union[NloptOptionsBase, NloptOptionsMma] = NloptOptionsBase()
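The discriminated union on `name` means the string the user supplies for `method` selects exactly one Method model. A self-contained sketch of the same routing idea using a plain dict dispatch (no pydantic dependency; the classes here are illustrative stand-ins, not the rsopt implementation):

```python
# Sketch: route a method name to its spec class, mirroring what
# pydantic.Field(discriminator='name') does for the nlopt method union.

class MethodSpec:
    name = None
    persis_in = ['f']

class Bobyqa(MethodSpec):
    name = 'LN_BOBYQA'

class Mma(MethodSpec):
    name = 'LD_MMA'
    persis_in = ['f', 'grad']  # gradient-based method also returns 'grad'

_REGISTRY = {cls.name: cls for cls in (Bobyqa, Mma)}

def resolve_method(name: str) -> type:
    """Pick the spec class whose discriminator value matches `name`."""
    try:
        return _REGISTRY[name]
    except KeyError:
        raise ValueError(f"{name} not available for use in software nlopt")
```

With pydantic the same lookup happens during validation, and an unknown `name` produces a validation error instead of a `ValueError`.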
48 changes: 30 additions & 18 deletions rsopt/configuration/options/nsga2.py
@@ -1,23 +1,35 @@
from rsopt.configuration.options import Options
from rsopt.configuration.schemas import options
import pydantic
import typing


class Nsga2(Options):
NAME = 'nsga2'
REQUIRED_OPTIONS = ('n_objectives', 'exit_criteria',)
SOFTWARE_OPTIONS = {}
class Nsga2Options(pydantic.BaseModel, extra='forbid'):
pop_size: int = pydantic.Field(..., description='Number of individuals in the population.')
n_objectives: int = pydantic.Field(..., description="Number of objectives.")

def __init__(self):
super().__init__()
self.n_objectives = 0
self.nworkers = 2
self.pop_size = 100
# for key, val in self.SOFTWARE_OPTIONS.items():
# self.__setattr__(key, val)
cxpb: float = pydantic.Field(0.8, description="Crossover probability.")
eta: float = pydantic.Field(10.0, description="Scales the mutation rate.")
# Default scaling of indpb is handled in EvolutionaryOptimizer since we need the problem dimensionality
# which is checked by the Configuration object
indpb: float = pydantic.Field(0.8, description="Probability that a gene changes. The default value will be "
"scaled by the individual dimension. If the user sets it then "
"the exact value is used.")

def get_sim_specs(self):
sim_specs = {
'in': ['individual'],
'out': [('fitness_values', float, self.n_objectives)]
}

return sim_specs
class MethodNsga2(options.Method):
name: typing.Literal['nsga2'] = 'nsga2'
aposmm_support = False
local_support = False
persis_in = ['fitness_values', 'sim_id']
sim_specs=options.SimSpecs(
inputs=['individual'],
static_outputs=[],
dynamic_outputs={'n_objectives': ('fitness_values', float)},
)
option_spec = Nsga2Options


class Nsga2(options.OptionsExit):
software: typing.Literal['nsga2'] = 'nsga2'
method: typing.Union[MethodNsga2] = pydantic.Field(MethodNsga2(), discriminator='name')
software_options: Nsga2Options
1 change: 1 addition & 0 deletions rsopt/configuration/options/options.py
@@ -1,3 +1,4 @@
# TODO: I think this is only used in aposmm. Would be nice to remove.
from rsopt.configuration.options import dfols
from rsopt.configuration.options import dlib
from rsopt.configuration.options import lh
31 changes: 22 additions & 9 deletions rsopt/configuration/options/pysot.py
@@ -1,12 +1,25 @@
from rsopt.configuration.options import Options
from rsopt.configuration.schemas import options
import pydantic
import typing


class pySOT(Options):
NAME = 'pysot'
REQUIRED_OPTIONS = ('exit_criteria',)
SOFTWARE_OPTIONS = {}
class PysotOptions(pydantic.BaseModel, extra='forbid'):
num_pts: typing.Optional[int] = pydantic.Field(None, description='Sets the number of points that will be evaluated as part of the experimental planning phase before optimization begins. Defaults to 2 * (DIM + 1) if not set.')

def __init__(self):
super().__init__()
self.nworkers = 2
self.method = 'pysot'

class MethodPysot(options.Method):
name: typing.Literal['pysot'] = 'pysot'
aposmm_support = False
local_support = False
persis_in = ['f', 'sim_id']
sim_specs=options.SimSpecs(
inputs=['x'],
static_outputs=[('f', float)],
dynamic_outputs={}
)
option_spec = PysotOptions


class Pysot(options.OptionsExit):
software: typing.Literal['pysot'] = 'pysot'
method: typing.Union[MethodPysot] = pydantic.Field(MethodPysot(), discriminator='name')
software_options: PysotOptions = PysotOptions()
89 changes: 61 additions & 28 deletions rsopt/configuration/options/scipy.py
@@ -1,28 +1,61 @@
from rsopt.configuration.options import Options


class Scipy(Options):
NAME = 'scipy'
# Ordering of required keys matters to validate method assignment is correct
REQUIRED_OPTIONS = ('method', 'exit_criteria')
# Only can allow what aposmm_localopt_support handles right now
# SciPy routines are internally named ['scipy_Nelder-Mead', 'scipy_COBYLA', 'scipy_BFGS']
# Will use same aliases as scipy uses in keeping with nlopt, and prefix here
ALLOWED_METHODS = ('Nelder-Mead', 'COBYLA', 'BFGS')

_opt_return_codes = {'Nelder-Mead': [0],
'COBYLA': [1],
'BFGS': [0]}

@classmethod
def _check_options(cls, options):
for key in cls.REQUIRED_OPTIONS:
assert options.get(key), f"{key} must be defined in options to use {cls.NAME}"
proposed_method = options.get(cls.REQUIRED_OPTIONS[0])
assert proposed_method in cls.ALLOWED_METHODS, \
f"{proposed_method} not available for use in software {cls.NAME}"

if 'software_options' in options.keys():
options['software_options'].setdefault('opt_return_codes', cls._opt_return_codes[options.get('method')])
else:
options['software_options'] = {'opt_return_codes': cls._opt_return_codes[options.get('method')]}
from rsopt.configuration.schemas import options
import pydantic
import typing


class ScipyOptionsBase(options.SoftwareOptions, extra='allow'):
pass


class ScipyOptionsBfgs(options.SoftwareOptions, extra='allow'):
grad_dimensions: int = pydantic.Field(..., description='Number of dimensions (size) of the gradient data.')


# Naming for SciPy algorithms should follow the scipy.optimize usage
class MethodNelderMead(options.Method):
name: typing.Literal['Nelder-Mead'] = 'Nelder-Mead'
parent_software = 'scipy'
aposmm_support = True
local_support = True
persis_in = ['f', ]
sim_specs=options.SimSpecs(
inputs=['x'],
static_outputs=[('f', float)],
dynamic_outputs={}
)
option_spec: typing.ClassVar = ScipyOptionsBase
_opt_return_code = [0]

class MethodCobyla(options.Method):
name: typing.Literal['COBYLA'] = 'COBYLA'
parent_software = 'scipy'
aposmm_support = True
local_support = True
persis_in = ['f', ]
sim_specs=options.SimSpecs(
inputs=['x'],
static_outputs=[('f', float)],
dynamic_outputs={}
)
option_spec: typing.ClassVar = ScipyOptionsBase
_opt_return_code = [1]

class MethodBfgs(options.Method):
name: typing.Literal['BFGS'] = 'BFGS'
parent_software = 'scipy'
aposmm_support = True
local_support = True
persis_in = ['f', 'grad']
sim_specs=options.SimSpecs(
inputs=['x'],
static_outputs=[('f', float)],
dynamic_outputs={'grad_dimensions': ('grad', float)}
)
option_spec: typing.ClassVar = ScipyOptionsBfgs
_opt_return_code = [0]


_METHODS = typing.Union[MethodNelderMead, MethodCobyla, MethodBfgs]

class Scipy(options.OptionsExit):
software: typing.Literal['scipy']
method: _METHODS = pydantic.Field(..., discriminator='name')
software_options: typing.Union[ScipyOptionsBase, ScipyOptionsBfgs] = ScipyOptionsBase()
3 changes: 3 additions & 0 deletions rsopt/configuration/parameters.py
@@ -44,6 +44,9 @@ def read_parameter_dict(obj):
PKDict: read_parameter_dict
}

# TODO: Pydantic parameter models either need to go in Parameters.parameters
# or I need to just refactor the access to parameters. The latter could make for better handling
# of new options like 'scale'

class Parameters:
def __init__(self):
File renamed without changes.
183 changes: 183 additions & 0 deletions rsopt/configuration/schemas/code.py
@@ -0,0 +1,183 @@
import abc
from collections.abc import Iterable
import enum
import pydantic
import typing
from functools import cached_property
from itertools import chain
from rsopt.configuration.schemas.parameters import Parameter, NumericParameter, CategoryParameter, \
parameter_discriminator, RepeatedNumericParameter, ParameterClasses
from rsopt.configuration.schemas.settings import Setting
from rsopt.configuration.schemas.setup import Setup
from rsopt.libe_tools.executors import EXECUTION_TYPES
from rsopt import util
from rsopt.codes import parsers
from typing_extensions import Annotated

class ArgumentModes(str, enum.Enum):
KWARGS = 'kwargs'
ARGS = 'args'

# TODO: The extra=allow is necessary with the method of dynamic parameter/setting attribute addition. But it does mean
# that extra fields a user might have put in the parameters/settings will be silently ignored here
class Code(pydantic.BaseModel, abc.ABC, extra='allow'):
"""Hold data from items in code list of the configuration.
Internally rsopt refers to items in the code list as "Jobs" since each code will be a separate simulation
or calculation that must be performed.
Specific implementations are defined for each code in rsopt.codes.
Instances of Code have parameters and settings dynamically set as attributes.
"""

parameters: list[
Annotated[
typing.Union[Annotated[NumericParameter, pydantic.Tag(ParameterClasses.NUMERICAL)],
Annotated[CategoryParameter, pydantic.Tag(ParameterClasses.CATEGORICAL)],
Annotated[RepeatedNumericParameter, pydantic.Tag(ParameterClasses.REPEATED)]
],
pydantic.Discriminator(parameter_discriminator)
]
] = pydantic.Field(default_factory=list)
settings: list[Setting] = pydantic.Field(default_factory=list)
setup: Setup

# TODO: Make _executor_arguments a computed_field?
# Executor arguments are passed to libEnsemble's Executor submit command
# Will not be set directly by user - set by the libEnsemble setup class from the info in Code
_executor_arguments: dict

@pydantic.field_validator('parameters', mode='before')
@classmethod
def format_parameters_list(cls, parsed_params: dict):
"""
This validator transforms the list of dictionaries from the YAML format
into a format compatible with the Pydantic model by extracting the key as 'code'.
"""
if parsed_params:
return [{"name": k, **v} for k, v in parsed_params.items()]
else:
return []
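The `mode='before'` validator above hoists each mapping key from the YAML-style parameters block into a `name` field, producing the list-of-dicts shape the model expects. A stdlib-only sketch of that transform (the sample parameter names and bounds are hypothetical):

```python
# Sketch of the parameters 'before' validator: each key in the YAML mapping
# becomes the 'name' entry of a parameter dict. Sample data is illustrative,
# not from an actual rsopt configuration.

def format_parameters_list(parsed_params: dict) -> list[dict]:
    if parsed_params:
        return [{"name": k, **v} for k, v in parsed_params.items()]
    return []

yaml_style = {
    'QUAD1.k1': {'min': 0.0, 'max': 2.0, 'start': 1.0},
    'SBEND.angle': {'min': -0.1, 'max': 0.1, 'start': 0.0},
}
as_list = format_parameters_list(yaml_style)
```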

@pydantic.field_validator('settings', mode='before')
@classmethod
def format_settings_list(cls, parsed_settings: dict):
if parsed_settings:
return [{"name": k, "value": v} for k, v in parsed_settings.items()]
else:
return []

@pydantic.model_validator(mode='after')
def set_dynamic_attributes(self):
for param in self.parameters:
setattr(self, param.name, param)

for setting in self.settings:
setattr(self, setting.name, setting)
return self

@pydantic.model_validator(mode='after')
def instantiate_process_functions(self):
for process in ('preprocess', 'postprocess'):
if getattr(self.setup, process) is not None:
module_path, function_name = getattr(self.setup, process)
module = util.run_path_as_module(module_path)
function = getattr(module, function_name)
else:
function = None
setattr(self, f'get_{process}_function', function)

return self

@classmethod
@abc.abstractmethod
def serial_run_command(cls) -> typing.Optional[str]:
pass

@classmethod
@abc.abstractmethod
def parallel_run_command(cls) -> typing.Optional[str]:
pass

@property
def get_sym_link_targets(self) -> set:
return set()

@classmethod
def _parse_name(cls, name: str) -> dict:
"""Parse parameter/setting name for use in model editing.
Parses rsopt's string formatting for specifying model command/element names and attributes
`command-or-element-name.[command-or-element-attribute].[command-index]`
Args:
name: str
Returns: dict
"""
return Parameter.parse_name({'name': name})

@abc.abstractmethod
def generate_input_file(self, kwarg_dict: dict, directory: str, is_parallel: bool) -> None:
pass

@property
def use_executor(self) -> bool:
"""Use a libEnsemble Executor for this job."""
return True

@property
def use_mpi(self) -> bool:
"""Should the executor use MPI."""
return self.setup.execution_type != EXECUTION_TYPES.SERIAL

@property
def run_command(self):
"""Generate the run command string to be given to the libEnsemble Executor."""
if self.use_mpi:
return self.parallel_run_command()
return self.serial_run_command()

# TODO: if parse_simulation_input_file raises an error it is not terminal and instead input_file_model is not defined
# TODO: clean up typing as part of the Lark implementation (right now this could be dict / sirepo SimData / or maybe None)
@cached_property
def input_file_model(self) -> typing.Optional[dict]:
input_file_model = parsers.parse_simulation_input_file(self.setup.input_file, self.code,
self.setup.ignored_files,
self.setup.execution_type == EXECUTION_TYPES.SHIFTER)
return input_file_model

def get_kwargs(self, x: typing.Any) -> tuple[list, dict]:
"""Create a dictionary with parameter/setting names paired to values in iterable x.
Pair settings and parameters in the job with concrete values that will be used in a simulation.
Serves as a mapping between the vector objects needed for most optimization / sampling routines and
the structured data the user sees.
Args:
x: Usually an iterable (will be made Iterable if single valued). Should be equal in length
to len(parameters) + len(settings).
Returns: (list, dict)
"""
parameters_dict = {}
if not isinstance(x, Iterable):
x = [x, ]
for val, name in zip(x, [param.name for param in self.parameters]):
parameters_dict[name] = val

settings_dict = {s.name: s.value for s in self.settings}

# Vector parameters are flattened into args but retained as a vector, associated with a single key, in kwargs
args = list(chain.from_iterable(v if isinstance(v, Iterable) else [v] for v in x))

return args, {**parameters_dict, **settings_dict}
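`get_kwargs` keeps a vector-valued parameter whole under one key in the kwargs dict while flattening its components into the positional args list. A self-contained sketch of that split (the parameter names are made up):

```python
# Sketch of get_kwargs' flattening: scalars pass through, vectors stay intact
# under their name in kwargs but are expanded element-by-element into args.
from collections.abc import Iterable
from itertools import chain

def split_args_kwargs(x, names):
    """Pair each name with its (possibly vector) value and flatten all
    values into a single positional-args list."""
    if not isinstance(x, Iterable):
        x = [x]
    kwargs = dict(zip(names, x))
    args = list(chain.from_iterable(v if isinstance(v, Iterable) else [v] for v in x))
    return args, kwargs

args, kwargs = split_args_kwargs([1.5, [2.0, 3.0]], ['k1', 'waveform'])
```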

def get_parameter_or_setting(self, name: str) -> typing.Union[Parameter, Setting]:
for item in [*self.parameters, *self.settings]:
if item.name == name:
return item

raise NameError(f"{name} could not be found in Parameter or Setting lists for Code {self.code}")
184 changes: 184 additions & 0 deletions rsopt/configuration/schemas/configuration.py
@@ -0,0 +1,184 @@
from collections.abc import Iterable
from itertools import chain
from typing import Any

import pydantic
import typing

# import rsopt.codes
from rsopt import codes

import rsopt.configuration.options
import pydantic_core

import numpy as np

from rsopt.configuration.options import SUPPORTED_OPTIONS

_SUPPORTED_CODES = typing.Annotated[codes.SUPPORTED_CODES, pydantic.Field(discriminator='code')]
_SUPPORTED_SAMPLE_OPTIONS = typing.Annotated[typing.Union[rsopt.configuration.options.SUPPORTED_OPTIONS.get_sample_models()], pydantic.Field(discriminator='software')]
_SUPPORTED_OPTIMIZER_OPTIONS = typing.Annotated[typing.Union[rsopt.configuration.options.SUPPORTED_OPTIONS.get_optimize_models()], pydantic.Field(discriminator='software')]



class ConfigurationSample(pydantic.BaseModel, extra='forbid'):
codes: list[_SUPPORTED_CODES] = pydantic.Field(discriminator='code')
options: _SUPPORTED_SAMPLE_OPTIONS = pydantic.Field(discriminator='software')

# MPI Communicator fields
# For libEnsemble use - should not be used to determine individual code parallel/serial operation
comms: typing.Literal['mpi', 'local'] = pydantic.Field(default='local', exclude=True)
mpi_size: int = pydantic.Field(default=0, exclude=True)
is_manager: bool = pydantic.Field(default=True, exclude=True)
mpi_comm: typing.Any = pydantic.Field(default=None, exclude=True)

def model_post_init(self, __context: Any) -> None:
"""Load simulation models before libEnsemble starts up"""

# TODO: Errors in code.input_file_model do not seem to get raised; you will instead get something like:
# "AttributeError: 'Genesis' object has no attribute 'input_file_model'"
# here as the error. All attempts to create a simple example of this happening outside of rsopt have
# failed to reproduce the issue so far.

# input_file_model is a cached_property so it will be stored
if self.options.load_models_at_startup:
for c in self.codes:
_ = c.input_file_model

@pydantic.field_validator('codes', mode='before')
@classmethod
def format_codes_list(cls, parsed_data: list):
"""
This validator transforms the list of dictionaries from the YAML format
into a format compatible with the Pydantic model by extracting the key as 'code'.
"""
return [{"code": key, **value} for item in parsed_data for key, value in item.items()]

@pydantic.field_validator('codes', mode='after')
@classmethod
def check_executors(cls, codes):
# libEnsemble does not support multiple types of executors
# Serial python can be run with executors because it will be run by the worker directly
# if 'force_executor' is not given

executors = [c.setup.execution_type for c in codes if c.use_executor]
assert all([executors[0] == e for e in executors]), \
f"All Executors must be the same type. Executor list is: {executors}"

return codes

@property
def executor_type(self) -> 'rsopt.libe_tools.executors.EXECUTOR_TYPES':
"""libEnsemble Executor to use for any codes not run directly by workers."""
executors = [c.setup.execution_type for c in self.codes if c.use_executor]
if len(executors) > 0:
return executors[0]
return None

@property
def lower_bounds(self) -> np.ndarray:
# TODO: This is going to need handling when we have non-numeric data or require discriminating between float and int
lower_bounds = []
for code in self.codes:
for param in code.parameters:
lower_bounds.append(param.min)
lower_bounds = list(chain.from_iterable(v if isinstance(v, Iterable) else [v] for v in lower_bounds))
return np.array(lower_bounds)

@property
def upper_bounds(self) -> np.ndarray:
# TODO: This is going to need handling when we have non-numeric data or require discriminating between float and int
upper_bounds = []
for code in self.codes:
for param in code.parameters:
upper_bounds.append(param.max)
upper_bounds = list(chain.from_iterable(v if isinstance(v, Iterable) else [v] for v in upper_bounds))
return np.array(upper_bounds)

@property
def start(self) -> np.ndarray:
# TODO: This is going to need handling when we have non-numeric data or require discriminating between float and int
start = []
for code in self.codes:
for param in code.parameters:
start.append(param.start)
start = list(chain.from_iterable(v if isinstance(v, Iterable) else [v] for v in start))
return np.array(start)

@property
def samples(self) -> np.ndarray:
# TODO: This is going to need handling when we have non-numeric data or require discriminating between float and int
samples = []
for code in self.codes:
for param in code.parameters:
samples.append(param.samples)
samples = list(chain.from_iterable(v if isinstance(v, Iterable) else [v] for v in samples))
return np.array(samples)

@property
def dimension(self) -> int:
dimension = 0
for code in self.codes:
for param in code.parameters:
if hasattr(param, 'dimension'):
dimension += param.dimension
else:
dimension += 1

return dimension

@property
def rsmpi_executor(self) -> bool:
"""Is rsmpi used for any Job (code)
"""
rsmpi_used = any([c.setup.execution_type == 'rsmpi' for c in self.codes])
return rsmpi_used

def get_sym_link_list(self) -> list:
# TODO: Genesis currently handles its own symlinking which is inconsistent with the usage here
# Should be changed to use the configuration symlink method and let libEnsemble handle it

sym_link_files = set()
for code in self.codes:
sym_link_files.update(code.get_sym_link_targets)
# Each flash simulation may be a unique executable and will not be in PATH
if code.code == 'flash':
sym_link_files.add(code.setup.executable)
sym_link_files.update(self.options.sym_links)

return list(sym_link_files)

class ConfigurationOptimize(ConfigurationSample):
options: _SUPPORTED_OPTIMIZER_OPTIONS = pydantic.Field(discriminator='software')
@pydantic.model_validator(mode='after')
def check_objective_function_requirement(self):
"""If the last code listed is Python and runs on the worker then an objective function is not required."""
if self.codes[-1].code == 'python':
if self.codes[-1].setup.serial_python_mode == 'worker':
return self
if self.options.objective_function is not None:
return self

raise pydantic_core.PydanticCustomError('objective_function_requirement',
'Last code is {code}. Unless the last code is python running with serial_python_mode: worker, ' + \
'an objective_function must be set in options: {options}.',
{'code': self.codes[-1].code, 'options': self.options}
)


def _config_discriminator(v: dict) -> str:
if v['options']['software'] in SUPPORTED_OPTIONS.get_sample_names():
return 'sample'
elif v['options']['software'] in SUPPORTED_OPTIONS.get_optimize_names():
return 'optimize'

# This is thin wrapper to get the proper Configuration class based on run mode
# So far this is really only necessary for the 'start' command which overrides the behavior of the selected software
# and can thus accept either sample or optimize configs
class Configuration(pydantic.BaseModel):
configuration: typing.Annotated[
typing.Union[typing.Annotated[ConfigurationSample, pydantic.Tag('sample')],
typing.Annotated[ConfigurationOptimize, pydantic.Tag('optimize')]
],
pydantic.Discriminator(_config_discriminator)
]
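`_config_discriminator` tags a raw configuration dict as 'sample' or 'optimize' based on the software named in its options, and that tag selects which Configuration model validates the document. A stdlib sketch of the tagging logic, with hypothetical name sets standing in for the SUPPORTED_OPTIONS queries:

```python
# Sketch of run-mode tagging: the software name decides whether the sample
# or optimize Configuration model is used. These name sets are illustrative
# placeholders, not the authoritative SUPPORTED_OPTIONS registries.

SAMPLE_NAMES = {'mesh_scan', 'lh_scan'}
OPTIMIZE_NAMES = {'nlopt', 'scipy', 'mobo', 'nsga2', 'pysot', 'dfols'}

def config_discriminator(v: dict) -> str:
    software = v['options']['software']
    if software in SAMPLE_NAMES:
        return 'sample'
    if software in OPTIMIZE_NAMES:
        return 'optimize'
    raise ValueError(f"Unknown software: {software}")
```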
116 changes: 116 additions & 0 deletions rsopt/configuration/schemas/options.py
@@ -0,0 +1,116 @@
import abc
import pydantic
import typing
from functools import cached_property
from rsopt import util


# TODO: This will completely replace the rsopt.configuration.options.Options
# Options schema will also be removed

class SimSpecs(pydantic.BaseModel):
inputs: list[str] # = pydantic.Field(alias='in')
static_outputs: list[typing.Union[tuple[str, type], tuple[str, type, int]]] = pydantic.Field(default_factory=list)
dynamic_outputs: dict[str, tuple[str, type]] = pydantic.Field(default_factory=dict)

# _initialized_dynamic_outputs is filled in at run time by the Options class; a private
# per-instance attribute avoids sharing one mutable list across all SimSpecs instances
_initialized_dynamic_outputs: list[tuple[str, type, int]] = pydantic.PrivateAttr(default_factory=list)

@pydantic.computed_field
@property
def outputs(self) -> list:
return list(self.static_outputs + self._initialized_dynamic_outputs)

class ExitCriteria(pydantic.BaseModel):
sim_max: typing.Optional[int] = None
gen_max: typing.Optional[int] = None
wallclock_max: typing.Optional[float] = None
stop_val: typing.Optional[tuple[str, float]] = None

class SoftwareOptions(pydantic.BaseModel, abc.ABC):
pass

class Method(pydantic.BaseModel, abc.ABC):
name: str
parent_software: typing.ClassVar[str]
aposmm_support: typing.ClassVar[bool] = pydantic.Field(..., description='Method can be used by APOSMM')
local_support: typing.ClassVar[bool] = pydantic.Field(..., description='Method is a local optimization algorithm')
persis_in: typing.ClassVar[list[str]] = pydantic.Field(..., description='Typing for libEnsemble persis_in')
sim_specs: typing.ClassVar[SimSpecs] = pydantic.Field(..., description='Description of sim_specs for libEnsemble')
option_spec: typing.ClassVar[SoftwareOptions] = pydantic.Field(..., description='Software options corresponding to this method. Used by Options validator to know what to validate against.')


# TODO: Could make Options a generic to slot in supported optimizer method types for each version
class Options(pydantic.BaseModel, abc.ABC, extra='forbid'):
software: str
method: Method
nworkers: int = 2
run_dir: str = './ensemble/'
record_interval: pydantic.NonNegativeInt = 0
save_every_k_sims: int = pydantic.Field(default=1, alias='record_interval')
use_worker_dirs: bool = True
sim_dirs_make: bool = True
output_file: str = ''
copy_final_logs: bool = True
sym_links: list[typing.Union[pydantic.FilePath, pydantic.DirectoryPath]] = pydantic.Field(default_factory=list)
objective_function: typing.Optional[tuple[pydantic.FilePath, str]] = pydantic.Field(default=None)
seed: typing.Union[None, str, int] = ''
software_options: SoftwareOptions
load_models_at_startup: bool = False

# TODO: This could end up being its own model
executor_options: dict = pydantic.Field(default_factory=dict)
use_zero_resources: bool = pydantic.Field(default=True, frozen=True, exclude=True)

@pydantic.field_validator('method', mode='before')
@classmethod
def set_method_name(cls, v):
return {'name': v}


@pydantic.model_validator(mode='after')
def initialize_dynamic_outputs(self):
for param, output_type in self.method.sim_specs.dynamic_outputs.items():
if hasattr(self, param):
size = getattr(self, param)
elif hasattr(self.software_options, param):
size = getattr(self.software_options, param)
else:
raise AttributeError(f"{param} not a member of {self}")
self.method.sim_specs._initialized_dynamic_outputs.append(
output_type + (size,)
)

return self
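`initialize_dynamic_outputs` completes each dynamic output tuple by appending the size looked up at run time on the options model. The tuple arithmetic, sketched standalone (the field names and sizes below are illustrative):

```python
# Sketch: a dynamic output starts as a ('field', dtype) pair whose size is
# only known at run time; appending the size yields the 3-tuple shape
# libEnsemble expects in sim_specs['out']. Values are illustrative.

dynamic_outputs = {'grad_dimensions': ('grad', float)}
sizes = {'grad_dimensions': 3}  # would come from software_options at run time

initialized = []
for param, output_type in dynamic_outputs.items():
    size = sizes[param]
    initialized.append(output_type + (size,))
```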

@cached_property
def instantiated_objective_function(self) -> typing.Optional[typing.Callable]:
if self.objective_function is not None:
module_path, function_name = self.objective_function
module = util.run_path_as_module(module_path)
function = getattr(module, function_name)
return function
return None

@pydantic.model_validator(mode="before")
@classmethod
def validate_software_options(cls, values):
"""Use method to find the expected model that defines the corresponding software_options and then validate
user input to software_options."""
method = values.get('method')
software_options = values.get('software_options')
allowed_options = cls.model_fields['method'].annotation
valid_options = {v.model_fields['name'].default: v.option_spec for v in typing.get_args(allowed_options)}

if method and software_options:
expected_class = valid_options.get(method)
if expected_class:
if not isinstance(software_options, dict):
raise ValueError("software_options must be provided as a dictionary")
values['software_options'] = expected_class(**software_options)

return values
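`validate_software_options` builds a name-to-option_spec map from the annotated method union and re-validates the user's dict against the matching spec. The lookup-and-rebuild step, sketched with plain classes (the names here are illustrative stand-ins for the pydantic models):

```python
# Sketch of the software_options dispatch: the chosen method name selects
# which options class the raw dict is validated against. Classes and names
# are illustrative, not the rsopt implementation.

class BaseOpts:
    def __init__(self, **kw):
        self.values = kw

class MmaOpts(BaseOpts):
    pass

valid_options = {'LN_BOBYQA': BaseOpts, 'LD_MMA': MmaOpts}

def build_software_options(method_name: str, raw) -> BaseOpts:
    expected = valid_options.get(method_name)
    if expected is None:
        raise ValueError(f"no option spec registered for {method_name}")
    if not isinstance(raw, dict):
        raise ValueError("software_options must be provided as a dictionary")
    return expected(**raw)

opts = build_software_options('LD_MMA', {'grad_dimensions': 2})
```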


class OptionsExit(Options):
exit_criteria: ExitCriteria
94 changes: 94 additions & 0 deletions rsopt/configuration/schemas/parameters.py
@@ -0,0 +1,94 @@
import enum
from typing import Literal, Optional, Union, List
import pydantic
import numpy as np

class ParameterClasses(str, enum.Enum):
REPEATED = "repeated"
NUMERICAL = "numeric"
CATEGORICAL = "category"

def parameter_discriminator(v: dict) -> str:
"""Identifies which subclass of Parameter v belongs to.
Args:
v: (dict)
Returns: (str) Tag value for discriminator
"""
if 'dimension' in v.keys():
return ParameterClasses.REPEATED
if 'min' in v.keys():
return ParameterClasses.NUMERICAL
elif 'values' in v.keys():
return ParameterClasses.CATEGORICAL


class Parameter(pydantic.BaseModel):
name: str = pydantic.Field(description='User specified name of the parameter. May include formatting to give attribute and index.')
item_name: str = pydantic.Field('', exclude=True, description='Internal usage. Parsed name to get just the item name.')
item_attribute: str = ''
item_index: int = 0
group: Optional[Union[str, int]] = None

@pydantic.model_validator(mode="before")
@classmethod
def parse_name(cls, values):
        """Try to split ``name`` into item name, item attribute, and item index.

        Parses rsopt's string formatting for specifying model command/element names and attributes:
        ``command-or-element-name.[command-or-element-attribute].[command-index]``
        If the command or element name itself includes '.' this formatting is not usable. As an alternative
        the user can set ``item_attribute`` (and ``item_index``) explicitly, in which case ``name`` is used
        verbatim as the item name.
        """
if 'item_attribute' in values:
assert 'item_index' in values, (f"Error for parameter {values['name']}. `item_index` must be set "
f"when explicitly setting `item_attribute`")
values['item_name'] = values['name']
return values
if 'name' in values:
            # Note: loop variable renamed so it does not shadow the imported `typing` module
            for part, field_name in zip(values['name'].split('.'), ('item_name', 'item_attribute', 'item_index')):
                values[field_name] = part

return values
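Without pydantic in the picture, the split performed by `parse_name` can be sketched as a plain function (`split_parameter_name` is a hypothetical helper for illustration; in the real model pydantic coerces `item_index` from str to int after the validator runs):

```python
def split_parameter_name(name: str) -> dict:
    """Mirror Parameter.parse_name: fill only as many fields as there are
    dot-separated parts. item_index stays a string here; the real pydantic
    model coerces it to int."""
    parsed = {'item_name': '', 'item_attribute': '', 'item_index': 0}
    for part, field_name in zip(name.split('.'), ('item_name', 'item_attribute', 'item_index')):
        parsed[field_name] = part
    return parsed


print(split_parameter_name('quad1.k1.2'))
# {'item_name': 'quad1', 'item_attribute': 'k1', 'item_index': '2'}
```

Because `zip` stops at the shorter sequence, a name with fewer than three parts simply leaves the remaining fields at their defaults.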


class NumericParameter(Parameter):
# TODO: Type of all must match
min: Union[int, float]
max: Union[int, float]
start: Union[int, float]
# TODO: Checking requirement means looking at Options
samples: int = 1
    scale: Literal['linear', 'log'] = 'linear'

# Cannot subclass NumericParameter: RepeatedNumericParameter would inherit the plain min/max/start fields
# instead of the property versions defined below
class RepeatedNumericParameter(Parameter):
dimension: int
min_setting: Union[int, float] = pydantic.Field(..., alias='min')
max_setting: Union[int, float] = pydantic.Field(..., alias='max')
start_setting: Union[int, float] = pydantic.Field(..., alias='start')
    # TODO: All dimensions currently share the same number of samples. Could accept int or list[int] to allow
    #  varying numbers of samples by dimension
samples_setting: int = pydantic.Field(1, alias='samples')
    scale: Literal['linear', 'log'] = 'linear'

    @property
    def min(self):
        return np.array([self.min_setting] * self.dimension)

    @property
    def max(self):
        return np.array([self.max_setting] * self.dimension)

    @property
    def start(self):
        return np.array([self.start_setting] * self.dimension)

    @property
    def samples(self):
        return np.array([self.samples_setting] * self.dimension)
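The broadcast these properties perform can be shown in isolation (a sketch assuming numpy is available; the values are arbitrary examples):

```python
import numpy as np

# A RepeatedNumericParameter with dimension=3 and min=0.5 would expose
# `min` as a length-3 array of the single configured setting:
dimension = 3
min_setting = 0.5
min_array = np.array([min_setting] * dimension)
print(min_array)  # [0.5 0.5 0.5]
```

Scalar settings are replicated per dimension on access, so optimizers downstream always see vectors of a consistent length.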


class CategoryParameter(Parameter):
values: List[Union[int, float, str]]