Performing parallel / batch processing of simulations#

Note: the cost of running the entire notebook is larger than 1 FlexCredit.

In this notebook, we will show an example of using tidy3d to evaluate device performance over a set of many design parameters.

This example will also provide a walkthrough of Tidy3D’s Job and Batch features for managing both individual simulations and sets of simulations.

Note: as of version 1.8, the tidy3d.web.run_async function provides the same functionality as a Batch with a simpler syntax. As such, it can be a good alternative for parameter scans, depending on how your script is set up.
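For reference, a minimal run_async sketch is shown below; sim_a and sim_b are placeholders for td.Simulation objects defined elsewhere (this notebook itself uses Job and Batch).

# minimal run_async sketch (sim_a and sim_b stand in for td.Simulation objects)
from tidy3d import web

batch_data = web.run_async(simulations={"sim_a": sim_a, "sim_b": sim_b}, path_dir="data")
for task_name, sim_data in batch_data.items():
    print(task_name)  # each item is a (task name, SimulationData) pair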

Additionally, we will show how to set up this problem using tidy3d.plugins.design, which provides convenience methods for defining and running parameter scans and has a full tutorial here.

For demonstration, we look at the splitting ratio of a directional coupler as we vary the coupling length between two waveguides. The sidewalls of the waveguides are slanted, deviating from the vertical direction by sidewall_angle.

[1]:
# standard python imports
import numpy as np
import matplotlib.pyplot as plt
import os
import gdstk

# tidy3D imports
import tidy3d as td
from tidy3d import web

Setup#

First, we set up some global parameters.

[2]:
# wavelength / frequency
lambda0 = 1.550  # all length scales in microns
freq0 = td.constants.C_0 / lambda0
fwidth = freq0 / 10

# Permittivity of waveguide and substrate
wg_n = 3.48
sub_n = 1.45
mat_wg = td.Medium(permittivity=wg_n**2)
mat_sub = td.Medium(permittivity=sub_n**2)

# Waveguide dimensions

# Waveguide height
wg_height = 0.22
# Waveguide width
wg_width = 0.45
# Waveguide separation in the beginning/end
wg_spacing_in = 8
# Reference plane where the cross section of the device is defined
reference_plane = "bottom"
# Angle of the sidewall measured from the vertical direction; positive values make the base wider than the top
sidewall_angle = np.pi / 6
# Total device length along propagation direction
device_length = 100
# Length of the bend region
bend_length = 16
# space between waveguide and PML
pml_spacing = 1
# resolution control: minimum number of grid cells per wavelength in each material
grid_cells_per_wvl = 16

Define waveguide bends and coupler#

Here we define our directional coupler shape programmatically in terms of the geometric parameters.

[3]:
def tanh_interp(max_arg):
    """Interpolator for tanh with adjustable extension"""
    scale = 1 / np.tanh(max_arg)
    return lambda u: 0.5 * (1 + scale * np.tanh(max_arg * (u * 2 - 1)))


def make_coupler(
    length,
    wg_spacing_in,
    wg_width,
    wg_spacing_coup,
    coup_length,
    bend_length,
):
    """Make an integrated coupler using the gdstk RobustPath object."""
    # bend interpolator
    interp = tanh_interp(3)
    delta = wg_width + wg_spacing_coup - wg_spacing_in
    offset = lambda u: wg_spacing_in + interp(u) * delta

    coup = gdstk.RobustPath(
        (-0.5 * length, 0),
        (wg_width, wg_width),
        wg_spacing_in,
        simple_path=True,
        layer=1,
        datatype=[0, 1],
    )
    coup.segment((-0.5 * coup_length - bend_length, 0))
    coup.segment(
        (-0.5 * coup_length, 0),
        offset=[lambda u: -0.5 * offset(u), lambda u: 0.5 * offset(u)],
    )
    coup.segment((0.5 * coup_length, 0))
    coup.segment(
        (0.5 * coup_length + bend_length, 0),
        offset=[lambda u: -0.5 * offset(1 - u), lambda u: 0.5 * offset(1 - u)],
    )
    coup.segment((0.5 * length, 0))
    return coup
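
If you would like to inspect the generated coupler path outside of Tidy3D, it can be written to a GDS file, for example as in the short sketch below (the file name and the example values of 0.1 µm coupling-region spacing and 10 µm coupling length are arbitrary):

# optional: write the coupler path to a GDS file for inspection in a layout viewer
lib = gdstk.Library()
preview_cell = lib.new_cell("COUPLER_PREVIEW")
preview_cell.add(make_coupler(device_length, wg_spacing_in, wg_width, 0.1, 10, bend_length))
lib.write_gds("data/coupler_preview.gds")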

Create Simulation and Submit Job#

The following function creates a tidy3d simulation object for a set of design parameters.

Note that the simulation has not been run yet, just created.

[4]:
def make_sim(coup_length, wg_spacing_coup, domain_field=False):
    """Make a simulation with a given length of the coupling region and
    distance between the waveguides in that region. If ``domain_field``
    is True, a 2D in-plane field monitor will be added.
    """

    # Geometry must be placed in GDS cells to import into Tidy3D
    coup_cell = gdstk.Cell("Coupler")

    substrate = gdstk.rectangle(
        (-device_length / 2, -wg_spacing_in / 2 - 10),
        (device_length / 2, wg_spacing_in / 2 + 10),
        layer=0,
    )
    coup_cell.add(substrate)

    # Add the coupler to a gdstk cell
    gds_coup = make_coupler(
        device_length,
        wg_spacing_in,
        wg_width,
        wg_spacing_coup,
        coup_length,
        bend_length,
    )
    coup_cell.add(gds_coup)

    # Substrate
    (oxide_geo,) = td.PolySlab.from_gds(
        gds_cell=coup_cell,
        gds_layer=0,
        gds_dtype=0,
        slab_bounds=(-10, 0),
        reference_plane=reference_plane,
        axis=2,
    )

    oxide = td.Structure(geometry=oxide_geo, medium=mat_sub)

    # Waveguides (import all datatypes if gds_dtype not specified)
    coupler1_geo, coupler2_geo = td.PolySlab.from_gds(
        gds_cell=coup_cell,
        gds_layer=1,
        slab_bounds=(0, wg_height),
        sidewall_angle=sidewall_angle,
        reference_plane=reference_plane,
        axis=2,
    )

    coupler1 = td.Structure(geometry=coupler1_geo, medium=mat_wg)

    coupler2 = td.Structure(geometry=coupler2_geo, medium=mat_wg)

    # Simulation size along propagation direction
    sim_length = 2 + 2 * bend_length + coup_length

    # Spacing between waveguides and PML
    sim_size = [
        sim_length,
        wg_spacing_in + wg_width + 2 * pml_spacing,
        wg_height + 2 * pml_spacing,
    ]

    # source
    src_pos = -sim_length / 2 + 0.5
    msource = td.ModeSource(
        center=[src_pos, wg_spacing_in / 2, wg_height / 2],
        size=[0, 3, 2],
        source_time=td.GaussianPulse(freq0=freq0, fwidth=fwidth),
        direction="+",
        mode_spec=td.ModeSpec(),
        mode_index=0,
    )

    mon_in = td.ModeMonitor(
        center=[(src_pos + 0.5), wg_spacing_in / 2, wg_height / 2],
        size=[0, 3, 2],
        freqs=[freq0],
        mode_spec=td.ModeSpec(),
        name="in",
    )
    mon_ref_bot = td.ModeMonitor(
        center=[(src_pos + 0.5), -wg_spacing_in / 2, wg_height / 2],
        size=[0, 3, 2],
        freqs=[freq0],
        mode_spec=td.ModeSpec(),
        name="refect_bottom",
    )
    mon_top = td.ModeMonitor(
        center=[-(src_pos + 0.5), wg_spacing_in / 2, wg_height / 2],
        size=[0, 3, 2],
        freqs=[freq0],
        mode_spec=td.ModeSpec(),
        name="top",
    )
    mon_bot = td.ModeMonitor(
        center=[-(src_pos + 0.5), -wg_spacing_in / 2, wg_height / 2],
        size=[0, 3, 2],
        freqs=[freq0],
        mode_spec=td.ModeSpec(),
        name="bottom",
    )
    monitors = [mon_in, mon_ref_bot, mon_top, mon_bot]

    if domain_field:
        domain_monitor = td.FieldMonitor(
            center=[0, 0, wg_height / 2],
            size=[td.inf, td.inf, 0],
            freqs=[freq0],
            name="field",
        )
        monitors.append(domain_monitor)

    # initialize the simulation
    sim = td.Simulation(
        size=sim_size,
        grid_spec=td.GridSpec.auto(min_steps_per_wvl=grid_cells_per_wvl),
        structures=[oxide, coupler1, coupler2],
        sources=[msource],
        monitors=monitors,
        run_time=50 / fwidth,
        boundary_spec=td.BoundarySpec.all_sides(boundary=td.PML()),
    )

    return sim

Inspect Simulation#

Let’s create and inspect a single simulation to make sure it was defined correctly before doing the full scan. The sidewalls of the waveguides deviate from the vertical direction by 30 degrees. We also add an in-plane field monitor to have a look at the field evolution in this one simulation. We will not use such a monitor in the batch to avoid storing unnecessarily large amounts of data.

[5]:
# Length of the coupling region
coup_length = 10

# Waveguide separation in the coupling region
wg_spacing_coup = 0.10

sim = make_sim(coup_length, wg_spacing_coup, domain_field=True)

[6]:
# visualize geometry
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 4))
sim.plot(z=wg_height / 2 + 0.01, ax=ax1)
sim.plot(x=0.1, ax=ax2)
ax2.set_xlim([-3, 3])
plt.show()

../_images/notebooks_ParameterScan_10_0.png

Create and Submit Job#

The Job object provides an interface for managing simulations.

job = Job(simulation) will create a job and upload the simulation to our server to run.

Then, one may call various methods of job to monitor progress, download results, and get information.

For more information, refer to the API reference.
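
For example, instead of the single job.run() call used below, the upload, monitoring, and download steps can be performed separately. A sketch of that workflow, assuming the current web API, is:

# step-by-step equivalent of job.run() (sketch)
job = web.Job(simulation=sim, task_name="CouplerVerify", verbose=True)
job.start()    # submit the task and start running on the server
job.monitor()  # block and print status updates until the run finishes
sim_data = job.load(path="data/sim_data.hdf5")  # download the results and load them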

[7]:
# create job, upload sim to server to begin running
job = web.Job(simulation=sim, task_name="CouplerVerify", verbose=True)

# download the results and load them into a simulation
sim_data = job.run(path="data/sim_data.hdf5")

12:49:14 EST Created task 'CouplerVerify' with task_id
             'fdve-4b5f2a10-dddd-4b73-888e-730823fcc6e2' and task_type 'FDTD'.
12:49:16 EST status = queued
12:49:28 EST status = preprocess
12:49:37 EST Maximum FlexCredit cost: 0.570. Use 'web.real_cost(task_id)' to get
             the billed FlexCredit cost after a simulation run.
             starting up solver
             running solver
             To cancel the simulation, use 'web.abort(task_id)' or
             'web.delete(task_id)' or abort/delete the task in the web UI.
             Terminating the Python script will not stop the job running on the
             cloud.
12:50:03 EST early shutoff detected at 52%, exiting.
             status = postprocess
12:50:08 EST status = success
12:50:13 EST loading simulation from data/sim_data.hdf5

Postprocessing#

The following function takes a completed simulation (with data loaded into it) and computes the quantities of interest.

For this case, we measure both the total transmission into the output ports and the ratio of power between the top and bottom ports.

[8]:
def measure_transmission(sim_data):
    """Constructs a "row" of the scattering matrix when sourced from top left port"""

    input_amp = sim_data["in"].amps.sel(direction="+")

    amps = np.zeros(4, dtype=complex)
    directions = ("-", "-", "+", "+")
    for i, (monitor, direction) in enumerate(
        zip(sim_data.simulation.monitors[:4], directions)
    ):
        amp = sim_data[monitor.name].amps.sel(direction=direction)
        amp_normalized = amp / input_amp
        amps[i] = np.squeeze(amp_normalized.values)

    return amps
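
The total transmission and splitting ratio mentioned above follow directly from these normalized amplitudes; a quick sketch using the single-run data:

# powers are the squared magnitudes of the normalized amplitudes
amps = measure_transmission(sim_data)
powers = np.abs(amps) ** 2
power_out = powers[2] + powers[3]    # total power transmitted into the top and bottom output ports
split_ratio = powers[2] / power_out  # fraction of the transmitted power routed to the top port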

[9]:
# test the measure_transmission function on the results of the single run
amps_arms = measure_transmission(sim_data)
print("mode amplitudes in each port:\n")
for amp, monitor in zip(amps_arms, sim_data.simulation.monitors[:-1]):
    print(f'\tmonitor     = "{monitor.name}"')
    print(f"\tamplitude^2 = {abs(amp)**2:.2f}")
    print(f"\tphase       = {(np.angle(amp)):.2f} (rad)\n")

mode amplitudes in each port:

        monitor     = "in"
        amplitude^2 = 0.00
        phase       = -2.01 (rad)

        monitor     = "refect_bottom"
        amplitude^2 = 0.00
        phase       = 0.76 (rad)

        monitor     = "top"
        amplitude^2 = 0.95
        phase       = -0.37 (rad)

        monitor     = "bottom"
        amplitude^2 = 0.04
        phase       = 1.20 (rad)

[10]:
fig, ax = plt.subplots(1, 1, figsize=(16, 3))
sim_data.plot_field("field", "Ey", z=wg_height / 2, f=freq0, ax=ax)
plt.show()

../_images/notebooks_ParameterScan_16_1.png

1D Parameter Scan#

Now we will scan through the coupling length parameter to see the effect on splitting ratio.

To do this, we will create a list of simulations corresponding to each parameter combination.

We will use this list to create a Batch object, which has similar functionality to Job but allows one to manage a set of jobs.

First, we create arrays to store the input and output values.

[11]:
# create variables to store parameters, simulation information, results
Nl = 11

ls = np.linspace(5, 12, Nl)
split_ratios = np.zeros(Nl)
efficiencies = np.zeros(Nl)

Create Batch#

We now create our list of simulations and use them to initialize a Batch.

For more information, refer to the API reference.

[12]:
# submit all jobs
sims = {f"l={l:.2f}": make_sim(l, wg_spacing_coup) for l in ls}
batch = web.Batch(simulations=sims, verbose=True)

12:50:15 EST Created task 'l=5.00' with task_id
             'fdve-524998bb-fd65-4d7f-9690-bf23fa526eee' and task_type 'FDTD'.
             Created task 'l=5.70' with task_id
             'fdve-aa5d87d4-6464-4a7a-a382-1839b7579d8f' and task_type 'FDTD'.
12:50:16 EST Created task 'l=6.40' with task_id
             'fdve-bc022117-32cf-4937-9535-c316c4026c5c' and task_type 'FDTD'.
12:50:17 EST Created task 'l=7.10' with task_id
             'fdve-531e51e6-a323-4837-9ec2-e90785405201' and task_type 'FDTD'.
12:50:18 EST Created task 'l=7.80' with task_id
             'fdve-ce1df67c-c871-4617-be23-321c7f69d9fe' and task_type 'FDTD'.
             Created task 'l=8.50' with task_id
             'fdve-a0783855-ed01-4716-8b72-a54f7de5bd5f' and task_type 'FDTD'.
12:50:19 EST Created task 'l=9.20' with task_id
             'fdve-c99ce10d-2ba1-4164-ac7c-09d756277bfc' and task_type 'FDTD'.
12:50:20 EST Created task 'l=9.90' with task_id
             'fdve-3312bcf5-19f4-4383-bed1-a9152f02b6af' and task_type 'FDTD'.
12:50:21 EST Created task 'l=10.60' with task_id
             'fdve-6843267f-7c24-437e-8c12-6ba077b6d317' and task_type 'FDTD'.
             Created task 'l=11.30' with task_id
             'fdve-b2f6ec76-f236-4827-944d-ba3797e5e1df' and task_type 'FDTD'.
12:50:22 EST Created task 'l=12.00' with task_id
             'fdve-54c478b4-00e7-4be2-a41a-4d0bc1850591' and task_type 'FDTD'.

Monitor Batch#

Here we can perform real-time monitoring of how many of the jobs in the batch have completed.

[13]:
batch_results = batch.run(path_dir="data")

12:50:29 EST Started working on Batch.
12:50:35 EST Maximum FlexCredit cost: 6.043 for the whole batch.
             Use 'Batch.real_cost()' to get the billed FlexCredit cost after the
             Batch has completed.
12:54:53 EST Batch complete.
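
Note that batch.run() wraps the submission, monitoring, and download steps into one call; if you prefer to perform them separately, a sketch of the equivalent workflow (assuming the current web API) is:

# step-by-step equivalent of batch.run() (sketch)
batch.start()                                # submit and start all tasks
batch.monitor()                              # print per-task progress until the batch completes
batch_results = batch.load(path_dir="data")  # download and load all results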

Load and visualize Results#

Finally, we compute the output quantities from the completed batch results.

Then we may plot the results.

[14]:
amps_batch = []
for task_name, sim_data in batch_results.items():
    amps_arms_i = np.array(measure_transmission(sim_data))
    amps_batch.append(amps_arms_i)
amps_batch = np.stack(amps_batch, axis=1)
print(amps_batch.shape)  # (4, Nl)
print(amps_batch)

12:54:55 EST loading simulation from
             data/fdve-524998bb-fd65-4d7f-9690-bf23fa526eee.hdf5
12:54:56 EST loading simulation from
             data/fdve-aa5d87d4-6464-4a7a-a382-1839b7579d8f.hdf5
12:54:57 EST loading simulation from
             data/fdve-bc022117-32cf-4937-9535-c316c4026c5c.hdf5
12:54:59 EST loading simulation from
             data/fdve-531e51e6-a323-4837-9ec2-e90785405201.hdf5
12:55:00 EST loading simulation from
             data/fdve-ce1df67c-c871-4617-be23-321c7f69d9fe.hdf5
12:55:01 EST loading simulation from
             data/fdve-a0783855-ed01-4716-8b72-a54f7de5bd5f.hdf5
12:55:02 EST loading simulation from
             data/fdve-c99ce10d-2ba1-4164-ac7c-09d756277bfc.hdf5
12:55:03 EST loading simulation from
             data/fdve-3312bcf5-19f4-4383-bed1-a9152f02b6af.hdf5
12:55:04 EST loading simulation from
             data/fdve-6843267f-7c24-437e-8c12-6ba077b6d317.hdf5
12:55:05 EST loading simulation from
             data/fdve-b2f6ec76-f236-4827-944d-ba3797e5e1df.hdf5
12:55:06 EST loading simulation from
             data/fdve-54c478b4-00e7-4be2-a41a-4d0bc1850591.hdf5
(4, 11)
[[-6.87784211e-03+1.54660657e-04j  2.80311781e-03-3.24439293e-03j
  -6.05798394e-03-1.35237882e-03j -1.85186978e-05+3.25048044e-03j
  -2.64649910e-03-1.04171487e-02j -3.93470514e-03+3.56730872e-03j
   7.28989029e-04-3.99204628e-03j -4.50742482e-03-8.20557869e-04j
   6.48167710e-04-2.07853885e-03j -4.69987509e-03-1.10109949e-03j
   4.02342104e-04+2.27009310e-03j]
 [ 7.61169474e-03-1.42971967e-03j -3.55052111e-03+3.17363588e-03j
   2.74310296e-03+2.55130465e-03j  1.24109835e-03+8.99221784e-04j
   1.17502778e-03+8.00563039e-03j  2.06976825e-03-2.68420995e-03j
  -1.63047384e-03+8.07344960e-03j  4.17844342e-03-1.16865294e-03j
  -2.48879395e-03+2.07880575e-03j  2.59994482e-03+7.49691152e-03j
  -4.69576211e-04-3.79743310e-03j]
 [ 4.46993092e-01+4.22390206e-01j  6.76810526e-01-2.61135427e-01j
   2.86602132e-02-8.19184341e-01j -8.13016667e-01-3.75172010e-01j
  -7.02821261e-01+6.40092096e-01j  3.28937775e-01+9.27132632e-01j
   9.91167879e-01+6.25125358e-02j  4.60587075e-01-8.64568534e-01j
  -5.90519381e-01-7.34380059e-01j -8.50719103e-01+2.34732714e-01j
  -9.98960644e-02+7.95666960e-01j]
 [ 5.36557161e-01-5.62449290e-01j -2.40363431e-01-6.31717151e-01j
  -5.57351344e-01-2.21605292e-02j -1.80626548e-01+3.86325114e-01j
   1.90656719e-01+2.11408323e-01j  1.28453934e-01-4.50059344e-02j
  -9.56959053e-04+1.63353895e-02j  1.47600306e-01+7.88053126e-02j
   2.45300517e-01-1.97485114e-01j -1.21780289e-01-4.38416165e-01j
  -5.80260287e-01-7.11656809e-02j]]
[15]:
powers = abs(amps_batch) ** 2
power_top = powers[2]
power_bot = powers[3]
power_out = power_top + power_bot

[16]:
plt.plot(ls, 100 * power_top, label="Top")
plt.plot(ls, 100 * power_bot, label="Bottom")
plt.plot(ls, 100 * power_out, label="Top + Bottom")
plt.xlabel("Coupling length (µm)")
plt.ylabel("Power ratio (%)")
plt.ylim(0, 100)
plt.legend()
plt.show()

../_images/notebooks_ParameterScan_26_0.png

Final Remarks#

Batches provide some other convenient functionality for managing large numbers of jobs.

For example, one can save the batch information to a file and load the batch at a later time, which is useful if you need to disconnect from the service while the jobs are running.

[17]:
# save batch metadata
batch.to_file("data/batch_data.json")

# load batch metadata into a new batch
loaded_batch = web.Batch.from_file("data/batch_data.json")
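
The reloaded batch behaves just like the original one; for example, the results can be retrieved again as in this short sketch (assuming the current web API):

# retrieve results from the reloaded batch
batch_results = loaded_batch.load(path_dir="data")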

For more details, refer to our documentation.

Using the design Plugin#

In tidy3d version 2.6.0, we introduced a design plugin, which allows users to programmatically define and run their parameter scans while also providing a convenient container for managing the resulting data.

For more details, please refer to our full tutorial on the design plugin.

We import the design plugin through tidy3d.plugins.design below.

[18]:
import tidy3d.plugins.design as tdd

Then we define our sweep dimensions as tdd.ParameterFloat instances and give them each a range.

Note: we need to ensure that the name arguments match the function argument names in our make_sim() function, which we will use to construct the simulations for the parameter scan.

[19]:
param_spc = tdd.ParameterFloat(name="wg_spacing_coup", span=(0.1, 0.15), num_points=3)
param_len = tdd.ParameterFloat(name="coup_length", span=(5, 12), num_points=3)

For this example, we will do a grid search over these points, so we define a tdd.MethodGrid and then combine everything into a tdd.DesignSpace.

[21]:
method = tdd.MethodGrid()
design_space = tdd.DesignSpace(parameters=[param_spc, param_len], method=method)

To run the sweep, we need to pass our evaluation function to the design space. Here we define it through pre- and post-processing functions, which lets the design plugin take advantage of parallelism by performing Batch processing under the hood.

The functions make_sim and measure_transmission almost work as pre- and post-processing functions already. We just wrap measure_transmission to return a dictionary of the amplitudes, so that the dictionary keys label the outputs in the resulting dataset.

[22]:
def fn_post(*args, **kwargs):
    """Post processing function, but label the outputs using a dictionary."""
    a, b, c, d = measure_transmission(*args, **kwargs)
    return dict(input=a, reflect_bottom=b, top=c, bottom=d)

Finally, we pass our pre-processing and post-processing functions to the DesignSpace.run_batch() method to get our results.

[23]:
results = design_space.run_batch(fn_pre=make_sim, fn_post=fn_post)
12:58:12 EST Created task '{'wg_spacing_coup': 0.1, 'coup_length': 5.0}' with 
             task_id 'fdve-99a49ca3-7686-47df-9f24-fba31a52a5b1' and task_type
             'FDTD'.
12:58:13 EST Created task '{'wg_spacing_coup': 0.125, 'coup_length': 5.0}' with 
             task_id 'fdve-34ca21cb-13df-4f53-bfec-ecd6b8a427e0' and task_type
             'FDTD'.
12:58:14 EST Created task '{'wg_spacing_coup': 0.15, 'coup_length': 5.0}' with 
             task_id 'fdve-449faaf1-f487-4503-8aef-02a0f81a74c3' and task_type
             'FDTD'.
12:58:15 EST Created task '{'wg_spacing_coup': 0.1, 'coup_length': 8.5}' with 
             task_id 'fdve-f025ccb3-ea9a-48f2-859a-f58d711e6580' and task_type
             'FDTD'.
12:58:16 EST Created task '{'wg_spacing_coup': 0.125, 'coup_length': 8.5}' with 
             task_id 'fdve-58748a96-a95c-4967-9cff-265b397330df' and task_type
             'FDTD'.
12:58:17 EST Created task '{'wg_spacing_coup': 0.15, 'coup_length': 8.5}' with 
             task_id 'fdve-85cfb419-f085-4d74-be9d-d2607c567421' and task_type
             'FDTD'.
12:58:18 EST Created task '{'wg_spacing_coup': 0.1, 'coup_length': 12.0}' with 
             task_id 'fdve-7bcd743f-bfb4-4bc0-9aba-f0cd77908413' and task_type
             'FDTD'.
             Created task '{'wg_spacing_coup': 0.125, 'coup_length': 12.0}' with
             task_id 'fdve-5e88a0be-0907-4a85-a2f4-ee15d2dec0fc' and task_type
             'FDTD'.
12:58:19 EST Created task '{'wg_spacing_coup': 0.15, 'coup_length': 12.0}' with 
             task_id 'fdve-c10d7e2b-2e28-4d1d-91a4-dc62207e30e1' and task_type
             'FDTD'.
12:58:25 EST Started working on Batch.
12:58:29 EST Maximum FlexCredit cost: 4.932 for the whole batch.
             Use 'Batch.real_cost()' to get the billed FlexCredit cost after the
             Batch has completed.
13:02:45 EST Batch complete.
13:02:47 EST loading simulation from
             ./fdve-99a49ca3-7686-47df-9f24-fba31a52a5b1.hdf5
13:02:48 EST loading simulation from
             ./fdve-34ca21cb-13df-4f53-bfec-ecd6b8a427e0.hdf5
             loading simulation from
             ./fdve-449faaf1-f487-4503-8aef-02a0f81a74c3.hdf5
13:02:49 EST loading simulation from
             ./fdve-f025ccb3-ea9a-48f2-859a-f58d711e6580.hdf5
13:02:50 EST loading simulation from
             ./fdve-58748a96-a95c-4967-9cff-265b397330df.hdf5
13:02:51 EST loading simulation from
             ./fdve-85cfb419-f085-4d74-be9d-d2607c567421.hdf5
13:02:52 EST loading simulation from
             ./fdve-7bcd743f-bfb4-4bc0-9aba-f0cd77908413.hdf5
13:02:53 EST loading simulation from
             ./fdve-5e88a0be-0907-4a85-a2f4-ee15d2dec0fc.hdf5
13:02:54 EST loading simulation from
             ./fdve-c10d7e2b-2e28-4d1d-91a4-dc62207e30e1.hdf5

Let’s get a pandas.DataFrame of the results and print out the first 5 rows.

[24]:
df = results.to_dataframe()

# take the absolute value squared of the "top" amplitude to get the power in the top port
df['amp_squared_top'] = df['top'].map(lambda x: abs(x)**2)

df.head()
[24]:
wg_spacing_coup coup_length input reflect_bottom top bottom amp_squared_top
0 0.100 5.0 -0.006878+0.000155j 0.007612-0.001430j 0.446993+0.422390j 0.536557-0.562449j 0.378216
1 0.125 5.0 -0.011013-0.001935j 0.009946-0.000710j 0.287411+0.180854j 0.500121-0.786399j 0.115313
2 0.150 5.0 -0.003424+0.001328j 0.008498+0.000748j 0.075504+0.033292j 0.413470-0.898602j 0.006809
3 0.100 8.5 -0.003934+0.003567j 0.002070-0.002684j 0.328938+0.927133j 0.128454-0.045006j 0.967775
4 0.125 8.5 0.002186+0.004294j -0.003497-0.002464j 0.465280+0.695997j 0.444890-0.297377j 0.700896

And we can also use DataFrame visualization methods, documented here.

[25]:
df.plot.scatter(x="wg_spacing_coup", y="coup_length", c="amp_squared_top", s=500)
plt.show()
../_images/notebooks_ParameterScan_43_0.png

If you are interested in this approach to parameter sweeps, we highly recommend checking out our design plugin tutorial for a deep dive, as well as the documentation for pandas.DataFrame.

Parameter Scan using Signac#

For users who might need more structure in their parameter scans, the open-source tool “signac” is a fantastic resource. Here we reproduce part of the previous parameter scan using signac to introduce how it can be applied to Tidy3D projects. You can find detailed tutorials and examples on its documentation page.

After importing the package, we need to define a project, which also tells signac to store our parameter scan data in a local directory of our choice.

[26]:
import signac

# make the project and give it a path to save the data into
project = signac.init_project(path="data/coupler")

With signac (and in parameter sweeps more generally), it is very useful to have a single function that defines the inputs and outputs of your parameter scan. In our case, we have a single input (the coupling length l) and two outputs: the coupling efficiency (eff) and the split ratio (ratio). We write a function that links these inputs and outputs through a Tidy3D simulation.

[27]:
def compute(l: float):
    """compute efficiency and split ratio as a function of the coupling length"""
    sim = make_sim(l, wg_spacing_coup)
    task_name = f"SWEEP_l={l:.3f}"
    sim_data = web.run(sim, task_name=task_name, verbose=False)
    amps_arms = np.array(measure_transmission(sim_data))
    powers = np.abs(amps_arms)**2
    efficiency = np.sum(powers)
    ratio_0 = powers[2] / efficiency
    return efficiency, ratio_0

The signac project contains a set of job objects, each of which defines a specific data point of our parameter scan. We therefore define another function that wraps compute but accepts a job instance instead, which makes managing parameter sweeps in signac easier.

This function opens a job, grabs its inputs (the l value in our case), computes the output quantities, and saves them both to the job.document record and to .txt files storing the parameter scan results.

[28]:
def compute_transmission(job):
    l = job.statepoint()["l"]
    print(f"Computing transmission of job: [l={l:.1f}]", job)
    eff, ratio = compute(**job.statepoint())
    job.document["eff"] = float(eff)
    job.document["ratio"] = float(ratio)
    with open(job.fn("eff.txt"), "w") as file:
        file.write(str(float(eff)) + "\n")
    with open(job.fn("ratio.txt"), "w") as file:
        file.write(str(float(ratio)) + "\n")

With this, we can start our parameter scan: we simply loop through our desired l values, construct a job for each, and pass it to our compute_transmission function.

Note: if intermediate simulation results are needed, it should be possible to modify compute to also return the SimulationData associated with the task, which we could then store with the job, as sketched below.
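
One possible way to do this, shown only as a sketch and not used in the runs below (the compute_transmission_with_data name is just for illustration), is to store the full SimulationData file in the job's workspace:

# sketch: also keep the raw SimulationData alongside the signac job record
def compute_transmission_with_data(job):
    l = job.statepoint()["l"]
    sim = make_sim(l, wg_spacing_coup)
    sim_data = web.run(sim, task_name=f"SWEEP_l={l:.3f}", verbose=False)
    powers = np.abs(np.array(measure_transmission(sim_data))) ** 2
    job.document["eff"] = float(np.sum(powers))
    job.document["ratio"] = float(powers[2] / np.sum(powers))
    sim_data.to_file(job.fn("sim_data.hdf5"))  # store the full monitor data in the job directory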

We will compute only 3 values within our range of 5 to 12 to save time.

Note: each job instance has its own unique ID, similar to a Tidy3D task_id; this is what is printed below as each job is computed.

[29]:
for l in np.linspace(5, 12, 3):
    statepoint = {"l": float(l)}
    job = project.open_job(statepoint)
    compute_transmission(job)
Computing transmission of job: [l=5.0] abb6bcf88ee5474914d016b3c1fcc7b3
Computing transmission of job: [l=8.5] c261dda788d2af5c1645cb17feb7854a
Computing transmission of job: [l=12.0] bc0f33313f39cc49b60566dcb7f169f9

Let’s take a look at our project and what data we’ve computed and stored so far.

[30]:
for job in project:
    print(job.statepoint(), job.document)
{'l': 8.5} {'eff': 0.9863407492968844, 'ratio': 0.9811772537321698}
{'l': 7.625} {'eff': 0.9848879657314651, 'ratio': 0.8947585604933013}
{'l': 5.0} {'eff': 0.9825664117220569, 'ratio': 0.3849269684345477}
{'l': 6.75} {'eff': 0.9824395884480177, 'ratio': 0.7526832110290623}
{'l': 11.125} {'eff': 0.9864517179631355, 'ratio': 0.8200637938272712}
{'l': 5.875} {'eff': 0.9825395966097096, 'ratio': 0.5741903063499177}
{'l': 10.25} {'eff': 0.9870632007683607, 'ratio': 0.9402367369783092}
{'l': 9.375} {'eff': 0.9873992735509358, 'ratio': 0.9971435263562974}
{'l': 12.0} {'eff': 0.9848516730552885, 'ratio': 0.6529562770748548}

signac also provides the ability to create a “linked view”, which gives us a human-readable filesystem layout for browsing our jobs.

[31]:
project.create_linked_view(prefix="data/coupler/view")

# let's print out some info stored in the directory we just created
!ls data/coupler/view/l/
!ls data/coupler/view/l/5.0/job/
!cat data/coupler/view/l/5.0/job/eff.txt
10.25  11.125 12.0   5.0    5.875  6.75   7.625  8.5    9.375
eff.txt                  signac_job_document.json
ratio.txt                signac_statepoint.json
0.9825664117220569

We can also initialize many data points to compute later and feed them to the compute_transmission function, with very basic “caching” implemented by checking whether a job’s results already exist in its record.

[32]:
def init_statepoints(n):
    for l in np.linspace(5, 12, n):
        sp = {"l": float(l)}
        job = project.open_job(sp)
        job.init()
        print("initialize", job)

# make 5 points between 5 and 12; note that 3 of them have already been computed
init_statepoints(5)
initialize abb6bcf88ee5474914d016b3c1fcc7b3
initialize 07222394eb1a1b59f9e869ef871a3c29
initialize c261dda788d2af5c1645cb17feb7854a
initialize bdbbc3c4500db5773fa3cd350b6195ed
initialize bc0f33313f39cc49b60566dcb7f169f9

After initializing our statepoints (input values), they are stored in our project and we can loop through them and compute any that don’t have records.

[33]:
for job in project:
    if "eff" not in job.document or "ratio" not in job.document:
        compute_transmission(job)
    else:
        print(f" - skipping job: {job} ")
 - skipping job: c261dda788d2af5c1645cb17feb7854a
 - skipping job: b22f487baf2db695e6b15444d42e31ff
 - skipping job: abb6bcf88ee5474914d016b3c1fcc7b3
 - skipping job: 07222394eb1a1b59f9e869ef871a3c29
 - skipping job: 9af2b1986858046540a9da68dbf4f425
 - skipping job: dca27bcde6b7bfede0d701809749e789
 - skipping job: bdbbc3c4500db5773fa3cd350b6195ed
 - skipping job: c7a1813a4fd1fd835898df062c3262c9
 - skipping job: bc0f33313f39cc49b60566dcb7f169f9

While we used td.web.Batch in our original example, signac also lets us leverage parallel processing tools in Python to achieve something similar.

Let’s initialize 9 total statepoints now (5 have already been computed) and feed them to a ThreadPool. The remaining jobs will be computed in parallel, depending on how many threads are available on your machine.

[34]:
init_statepoints(9)

from multiprocessing.pool import ThreadPool

# convenience function that calls compute_transmission only for uncomputed jobs
def compute_transmission_cached(job):
    if "eff" not in job.document or "ratio" not in job.document:
        compute_transmission(job)

with ThreadPool() as pool:
    pool.map(compute_transmission_cached, list(project))
initialize abb6bcf88ee5474914d016b3c1fcc7b3
initialize dca27bcde6b7bfede0d701809749e789
initialize 07222394eb1a1b59f9e869ef871a3c29
initialize b22f487baf2db695e6b15444d42e31ff
initialize c261dda788d2af5c1645cb17feb7854a
initialize c7a1813a4fd1fd835898df062c3262c9
initialize bdbbc3c4500db5773fa3cd350b6195ed
initialize 9af2b1986858046540a9da68dbf4f425
initialize bc0f33313f39cc49b60566dcb7f169f9
[35]:
for job in project:
    print(job.statepoint(), job.document)
{'l': 8.5} {'eff': 0.9863407492968844, 'ratio': 0.9811772537321698}
{'l': 7.625} {'eff': 0.9848879657314651, 'ratio': 0.8947585604933013}
{'l': 5.0} {'eff': 0.9825664117220569, 'ratio': 0.3849269684345477}
{'l': 6.75} {'eff': 0.9824395884480177, 'ratio': 0.7526832110290623}
{'l': 11.125} {'eff': 0.9864517179631355, 'ratio': 0.8200637938272712}
{'l': 5.875} {'eff': 0.9825395966097096, 'ratio': 0.5741903063499177}
{'l': 10.25} {'eff': 0.9870632007683607, 'ratio': 0.9402367369783092}
{'l': 9.375} {'eff': 0.9873992735509358, 'ratio': 0.9971435263562974}
{'l': 12.0} {'eff': 0.9848516730552885, 'ratio': 0.6529562770748548}

Finally, we can consolidate and plot our results.

[36]:
ls = np.array([job.statepoint()["l"] for job in project])
effs = np.array([job.document["eff"] for job in project])
ratios = np.array([job.document["ratio"] for job in project])
[37]:
plt.scatter(ls, 100 * ratios, label="split ratio")
plt.scatter(ls, 100 * effs, label="efficiency")
plt.xlabel("Coupling length (µm)")
plt.ylabel("value (%)")
plt.ylim(0, 100)
plt.legend()
plt.show()

../_images/notebooks_ParameterScan_66_0.png

For more information, we highly recommend visiting the signac documentation page, which explains the many other features not covered here that could assist you in your parameter scans with Tidy3D.