tidy3d.SpatialDataArray#
- class tidy3d.SpatialDataArray(data: typing.Any = <NA>, coords: typing.Optional[typing.Union[collections.abc.Sequence[collections.abc.Sequence[Any] | pandas.core.indexes.base.Index | xarray.core.dataarray.DataArray], collections.abc.Mapping[typing.Any, typing.Any]]] = None, dims: typing.Optional[typing.Union[collections.abc.Hashable, collections.abc.Sequence[collections.abc.Hashable]]] = None, name: typing.Optional[collections.abc.Hashable] = None, attrs: typing.Optional[collections.abc.Mapping] = None, indexes: typing.Optional[collections.abc.Mapping[typing.Any, xarray.core.indexes.Index]] = None, fastpath: bool = False)#
Bases:
tidy3d.components.data.data_array.DataArray
Spatial distribution.
Example
>>> x = [1, 2]
>>> y = [2, 3, 4]
>>> z = [3, 4, 5, 6]
>>> coords = dict(x=x, y=y, z=z)
>>> fd = SpatialDataArray((1 + 1j) * np.random.random((2, 3, 4)), coords=coords)
- __init__(data: typing.Any = <NA>, coords: typing.Optional[typing.Union[collections.abc.Sequence[collections.abc.Sequence[Any] | pandas.core.indexes.base.Index | xarray.core.dataarray.DataArray], collections.abc.Mapping[typing.Any, typing.Any]]] = None, dims: typing.Optional[typing.Union[collections.abc.Hashable, collections.abc.Sequence[collections.abc.Hashable]]] = None, name: typing.Optional[collections.abc.Hashable] = None, attrs: typing.Optional[collections.abc.Mapping] = None, indexes: typing.Optional[collections.abc.Mapping[typing.Any, xarray.core.indexes.Index]] = None, fastpath: bool = False) None #
Methods
- __init__([data, coords, dims, name, attrs, ...])
- all([dim, keep_attrs]) – Reduce this DataArray's data by applying all along some dimension(s).
- any([dim, keep_attrs]) – Reduce this DataArray's data by applying any along some dimension(s).
- argmax([dim, axis, keep_attrs, skipna]) – Index or indices of the maximum of the DataArray over one or more dimensions.
- argmin([dim, axis, keep_attrs, skipna]) – Index or indices of the minimum of the DataArray over one or more dimensions.
- argsort([axis, kind, order]) – Returns the indices that would sort this array.
- as_numpy() – Coerces wrapped data and coordinates into numpy arrays, returning a DataArray.
- assign_attrs(*args, **kwargs) – Assign new attrs to this object.
- assign_coord_attrs(val) – Assign the correct coordinate attributes to the DataArray.
- assign_coords([coords]) – Assign new coordinates to this object.
- assign_data_attrs(val) – Assign the correct data attributes to the DataArray.
- astype(dtype, *[, order, casting, subok, ...]) – Copy of the xarray object, with data cast to a specified type.
- bfill(dim[, limit]) – Fill NaN values by propagating values backward.
- broadcast_equals(other) – Two DataArrays are broadcast equal if they are equal after broadcasting them against each other such that they have the same dimensions.
- broadcast_like(other, *[, exclude]) – Broadcast this DataArray against another Dataset or DataArray.
- check_unloaded_data(val) – If the data comes in as the raw data array string, raise a custom warning.
- chunk([chunks, name_prefix, token, lock, ...]) – Coerce this array's data into a dask array with the given chunks.
- clip([min, max, keep_attrs]) – Return an array whose values are limited to [min, max].
- close() – Release any resources linked to this object.
- coarsen([dim, boundary, side, coord_func]) – Coarsen object for DataArrays.
- combine_first(other) – Combine two DataArray objects, with union of coordinates.
- compute(**kwargs) – Manually trigger loading of this array's data from disk or a remote source into memory and return a new array.
- conj() – Complex-conjugate all elements.
- conjugate() – Return the complex conjugate, element-wise.
- convert_calendar(calendar[, dim, align_on, ...]) – Convert the DataArray to another calendar.
- copy([deep, data]) – Returns a copy of this array.
- count([dim, keep_attrs]) – Reduce this DataArray's data by applying count along some dimension(s).
- cumprod([dim, skipna, keep_attrs]) – Reduce this DataArray's data by applying cumprod along some dimension(s).
- cumsum([dim, skipna, keep_attrs]) – Reduce this DataArray's data by applying cumsum along some dimension(s).
- cumulative_integrate([coord, datetime_unit]) – Integrate cumulatively along the given coordinate using the trapezoidal rule.
- curvefit(coords, func[, reduce_dims, ...]) – Curve fitting optimization for arbitrary functions.
- diff(dim[, n, label]) – Calculate the n-th order discrete difference along the given axis.
- differentiate(coord[, edge_order, datetime_unit]) – Differentiate the array with second-order accurate central differences.
- does_cover(bounds) – Check whether the data fully covers the spatial region specified by bounds.
- dot(other[, dim]) – Perform dot product of two DataArrays along their shared dims.
- drop([labels, dim, errors]) – Backward compatible method based on drop_vars and drop_sel.
- drop_duplicates(dim, *[, keep]) – Returns a new DataArray with duplicate dimension values removed.
- drop_encoding() – Return a new DataArray without encoding on the array or any attached coords.
- drop_indexes(coord_names, *[, errors]) – Drop the indexes assigned to the given coordinates.
- drop_isel([indexers]) – Drop index positions from this DataArray.
- drop_sel([labels, errors]) – Drop index labels from this DataArray.
- drop_vars(names, *[, errors]) – Returns an array with dropped variables.
- dropna(dim, *[, how, thresh]) – Returns a new array with dropped labels for missing values along the provided dimension.
- equals(other) – True if two DataArrays have the same dimensions, coordinates and values; otherwise False.
- expand_dims([dim, axis]) – Return a new object with an additional axis (or axes) inserted at the corresponding position in the array shape.
- ffill(dim[, limit]) – Fill NaN values by propagating values forward.
- fillna(value) – Fill missing values in this object.
- from_dict(d) – Convert a dictionary into an xarray.DataArray.
- from_file(fname, group_path) – Load a DataArray from an hdf5 file with a given path to the group.
- from_hdf5(fname, group_path) – Load a DataArray from an hdf5 file with a given path to the group.
- from_iris(cube) – Convert an iris.cube.Cube into an xarray.DataArray.
- from_series(series[, sparse]) – Convert a pandas.Series into an xarray.DataArray.
- get_axis_num(dim) – Return axis number(s) corresponding to dimension(s) in this array.
- get_index(key) – Get an index for a dimension, with fall-back to a default RangeIndex.
- groupby(group[, squeeze, restore_coord_dims]) – Returns a DataArrayGroupBy object for performing grouped operations.
- groupby_bins(group, bins[, right, labels, ...]) – Returns a DataArrayGroupBy object for performing grouped operations.
- head([indexers]) – Return a new DataArray whose data is given by the first n values along the specified dimension(s).
- identical(other) – Like equals, but also checks the array name and attributes, and attributes on all coordinates.
- idxmax([dim, skipna, fill_value, keep_attrs]) – Return the coordinate label of the maximum value along a dimension.
- idxmin([dim, skipna, fill_value, keep_attrs]) – Return the coordinate label of the minimum value along a dimension.
- integrate([coord, datetime_unit]) – Integrate along the given coordinate using the trapezoidal rule.
- interp([coords, method, assume_sorted, kwargs]) – Interpolate a DataArray onto new coordinates.
- interp_calendar(target[, dim]) – Interpolates the DataArray to another calendar based on decimal year measure.
- interp_like(other[, method, assume_sorted, ...]) – Interpolate this object onto the coordinates of another object, filling out-of-range values with NaN.
- interpolate_na([dim, method, limit, ...]) – Fill in NaNs by interpolating according to different methods.
- isel([indexers, drop, missing_dims]) – Return a new DataArray whose data is given by selecting indexes along the specified dimension(s).
- isin(test_elements) – Tests each value in the array for whether it is in test elements.
- isnull([keep_attrs]) – Test each value in the array for whether it is a missing value.
- item(*args) – Copy an element of an array to a standard Python scalar and return it.
- load(**kwargs) – Manually trigger loading of this array's data from disk or a remote source into memory and return this array.
- map_blocks(func[, args, kwargs, template]) – Apply a function to each block of this DataArray.
- max([dim, skipna, keep_attrs]) – Reduce this DataArray's data by applying max along some dimension(s).
- mean([dim, skipna, keep_attrs]) – Reduce this DataArray's data by applying mean along some dimension(s).
- median([dim, skipna, keep_attrs]) – Reduce this DataArray's data by applying median along some dimension(s).
- min([dim, skipna, keep_attrs]) – Reduce this DataArray's data by applying min along some dimension(s).
- multiply_at(value, coord_name, indices) – Multiply self by value at indices into coord_name.
- notnull([keep_attrs]) – Test each value in the array for whether it is not a missing value.
- pad([pad_width, mode, stat_length, ...]) – Pad this array along one or more dimensions.
- persist(**kwargs) – Trigger computation in constituent dask arrays.
- pipe(func, *args, **kwargs) – Apply func(self, *args, **kwargs).
- polyfit(dim, deg[, skipna, rcond, w, full, cov]) – Least squares polynomial fit.
- prod([dim, skipna, min_count, keep_attrs]) – Reduce this DataArray's data by applying prod along some dimension(s).
- quantile(q[, dim, method, keep_attrs, ...]) – Compute the qth quantile of the data along the specified dimension.
- query([queries, parser, engine, missing_dims]) – Return a new data array indexed along the specified dimension(s), where the indexers are given as strings containing Python expressions to be evaluated against the values in the array.
- rank(dim, *[, pct, keep_attrs]) – Ranks the data.
- reduce(func[, dim, axis, keep_attrs, keepdims]) – Reduce this array by applying func along some dimension(s).
- reflect(axis, center) – Reflect data across the plane defined by parameters axis and center from right to left.
- reindex([indexers, method, tolerance, copy, ...]) – Conform this object onto the indexes of another object, filling in missing values with fill_value.
- reindex_like(other, *[, method, tolerance, ...]) – Conform this object onto the indexes of another object, for indexes which the objects share.
- rename([new_name_or_name_dict]) – Returns a new DataArray with renamed coordinates, dimensions or a new name.
- reorder_levels([dim_order]) – Rearrange index levels using input order.
- resample([indexer, skipna, closed, label, ...]) – Returns a Resample object for performing resampling operations.
- reset_coords([names, drop]) – Given names of coordinates, reset them to become variables.
- reset_encoding()
- reset_index(dims_or_levels[, drop]) – Reset the specified index(es) or multi-index level(s).
- roll([shifts, roll_coords]) – Roll this array by an offset along one or more dimensions.
- rolling([dim, min_periods, center]) – Rolling window object for DataArrays.
- rolling_exp([window, window_type]) – Exponentially-weighted moving window.
- round(*args, **kwargs) – Round an array to the given number of decimals.
- searchsorted(v[, side, sorter]) – Find indices where elements of v should be inserted in a to maintain order.
- sel([indexers, method, tolerance, drop]) – Return a new DataArray whose data is given by selecting index labels along the specified dimension(s).
- sel_inside(bounds) – Return a new SpatialDataArray that contains the minimal amount of data necessary to cover the spatial region defined by bounds.
- set_close(close) – Register the function that releases any resources linked to this object.
- set_index([indexes, append]) – Set DataArray (multi-)indexes using one or more existing coordinates.
- set_xindex(coord_names[, index_cls]) – Set a new, Xarray-compatible index from one or more existing coordinate(s).
- shift([shifts, fill_value]) – Shift this DataArray by an offset along one or more dimensions.
- sortby(variables[, ascending]) – Sort object by labels or values (along an axis).
- squeeze([dim, drop, axis]) – Return a new object with squeezed data.
- stack([dimensions, create_index, index_cls]) – Stack any number of existing dimensions into a single new dimension.
- std([dim, skipna, ddof, keep_attrs]) – Reduce this DataArray's data by applying std along some dimension(s).
- sum([dim, skipna, min_count, keep_attrs]) – Reduce this DataArray's data by applying sum along some dimension(s).
- swap_dims([dims_dict]) – Returns a new DataArray with swapped dimensions.
- tail([indexers]) – Return a new DataArray whose data is given by the last n values along the specified dimension(s).
- thin([indexers]) – Return a new DataArray whose data is given by each n value along the specified dimension(s).
- to_dask_dataframe([dim_order, set_index]) – Convert this array into a dask.dataframe.DataFrame.
- to_dataframe([name, dim_order]) – Convert this array and its coordinates into a tidy pandas.DataFrame.
- to_dataset([dim, name, promote_attrs]) – Convert a DataArray to a Dataset.
- to_dict([data, encoding]) – Convert this xarray.DataArray into a dictionary following xarray naming conventions.
- to_hdf5(fname, group_path) – Save an xr.DataArray to the hdf5 file with a given path to the group.
- to_index() – Convert this variable to a pandas.Index.
- to_iris() – Convert this array into an iris.cube.Cube.
- to_masked_array([copy]) – Convert this array into a numpy.ma.MaskedArray.
- to_netcdf([path, mode, format, group, ...]) – Write DataArray contents to a netCDF file.
- to_numpy() – Coerces wrapped data to numpy and returns a numpy.ndarray.
- to_pandas() – Convert this array into a pandas object with the same shape.
- to_series() – Convert this array into a pandas.Series.
- to_unstacked_dataset(dim[, level]) – Unstack DataArray expanding to Dataset along a given level of a stacked coordinate.
- to_zarr([store, chunk_store, mode, ...]) – Write DataArray contents to a Zarr store.
- transpose(*dims[, transpose_coords, ...]) – Return a new DataArray object with transposed dimensions.
- unify_chunks() – Unify chunk size along all chunked dimensions of this DataArray.
- unstack([dim, fill_value, sparse]) – Unstack existing dimensions corresponding to MultiIndexes into multiple new dimensions.
- validate_dims(val) – Make sure the dims are the same as _dims, then put them in the correct order.
- var([dim, skipna, ddof, keep_attrs]) – Reduce this DataArray's data by applying var along some dimension(s).
- weighted(weights) – Weighted DataArray operations.
- where(cond[, other, drop]) – Filter elements from this object according to a condition.
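Of the methods above, does_cover, sel_inside, reflect and multiply_at are tidy3d-specific spatial helpers on top of the standard xarray API. A minimal sketch of the region-based pair, assuming bounds follows tidy3d's ((xmin, ymin, zmin), (xmax, ymax, zmax)) convention; the array values are illustrative only:
>>> import numpy as np
>>> from tidy3d import SpatialDataArray
>>> coords = dict(x=[1, 2], y=[2, 3, 4], z=[3, 4, 5, 6])
>>> arr = SpatialDataArray(np.random.random((2, 3, 4)), coords=coords)
>>> bounds = ((1, 2, 3), (2, 4, 6))
>>> covered = arr.does_cover(bounds)  # True if the coordinates span the region
>>> sub = arr.sel_inside(bounds)  # minimal sub-array that still covers the region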
Attributes
- T
- abs – Absolute value of data array.
- attrs – Dictionary storing arbitrary metadata with this array.
- chunks – Tuple of block lengths for this dataarray's data, in order of dimensions, or None if the underlying data is not a dask array.
- chunksizes – Mapping from dimension names to block lengths for this dataarray's data, or None if the underlying data is not a dask array.
- coords – Mapping of DataArray objects corresponding to coordinate variables.
- data – The DataArray's data as an array.
- dims – Tuple of dimension names associated with this array.
- dt – alias of xarray.core.accessor_dt.CombinedDatetimelikeAccessor[DataArray]
- dtype – Data-type of the array's elements.
- encoding – Dictionary of format-specific settings for how this array should be serialized.
- imag – The imaginary part of the array.
- indexes – Mapping of pandas.Index objects used for label based indexing.
- loc – Attribute for location based indexing like pandas.
- name – The name of this array.
- nbytes – Total bytes consumed by the elements of this DataArray's data.
- ndim – Number of array dimensions.
- real – The real part of the array.
- shape – Tuple of array dimensions.
- size – Number of elements in the array.
- sizes – Ordered mapping from dimension names to lengths.
- str – alias of xarray.core.accessor_str.StringAccessor[DataArray]
- values – The array's data as a numpy.ndarray.
- variable – Low level interface to the Variable object for this DataArray.
- xindexes – Mapping of Index objects used for label based indexing.
- __abs__() typing_extensions.Self #
Same as abs(a).
- __add__(other: DaCompatible) Self #
Same as a + b.
- __and__(other: DaCompatible) Self #
Same as a & b.
- __dir__() list[str] #
Provide method name lookup and completion. Only provide ‘public’ methods.
- __eq__(other) bool #
Whether two data array objects are equal.
- __floordiv__(other: DaCompatible) Self #
Same as a // b.
- __ge__(other: DaCompatible) Self #
Same as a >= b.
- classmethod __get_validators__()#
Validators that get run when DataArray objects are added to pydantic models.
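Because of this hook, DataArray subclasses can be used directly as fields of pydantic models, which is how tidy3d components validate array data internally. A minimal sketch, assuming pydantic v1 semantics; the model and field names are hypothetical:
>>> import numpy as np
>>> import pydantic
>>> from tidy3d import SpatialDataArray
>>> class EpsModel(pydantic.BaseModel):  # hypothetical container model
...     eps: SpatialDataArray  # validated through __get_validators__
>>> m = EpsModel(
...     eps=SpatialDataArray(np.ones((1, 1, 1)), coords=dict(x=[0], y=[0], z=[0]))
... )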
- __gt__(other: DaCompatible) Self #
Same as a > b.
- __hash__() int #
Generate hash value for a DataArray instance, needed for custom components.
- __iadd__(other: DaCompatible) Self #
Same as a += b.
- __iand__(other: DaCompatible) Self #
Same as a &= b.
- __ifloordiv__(other: DaCompatible) Self #
Same as a //= b.
- __ilshift__(other: DaCompatible) Self #
Same as a <<= b.
- __imod__(other: DaCompatible) Self #
Same as a %= b.
- classmethod __init_subclass__(**kwargs)#
Verify that all subclasses explicitly define __slots__. If they don't, raise an error in the core xarray module and a FutureWarning in third-party extensions.
- __invert__() typing_extensions.Self #
Same as ~a.
- __irshift__(other: DaCompatible) Self #
Same as a >>= b.
- __isub__(other: DaCompatible) Self #
Same as a -= b.
- __itruediv__(other: DaCompatible) Self #
Same as a /= b.
- __ixor__(other: DaCompatible) Self #
Same as a ^= b.
- __le__(other: DaCompatible) Self #
Same as a <= b.
- __lshift__(other: DaCompatible) Self #
Same as a << b.
- __lt__(other: DaCompatible) Self #
Same as a < b.
- __mod__(other: DaCompatible) Self #
Same as a % b.
- classmethod __modify_schema__(field_schema)#
Sets the schema of DataArray object.
- __mul__(other: DaCompatible) Self #
Same as a * b.
- __neg__() typing_extensions.Self #
Same as -a.
- __or__(other: DaCompatible) Self #
Same as a | b.
- __pos__() typing_extensions.Self #
Same as +a.
- __pow__(other: DaCompatible) Self #
Same as a ** b.
- __radd__(other: DaCompatible) Self #
Same as a + b.
- __rand__(other: DaCompatible) Self #
Same as a & b.
- __rfloordiv__(other: DaCompatible) Self #
Same as a // b.
- __rmod__(other: DaCompatible) Self #
Same as a % b.
- __rmul__(other: DaCompatible) Self #
Same as a * b.
- __ror__(other: DaCompatible) Self #
Same as a | b.
- __rpow__(other: DaCompatible) Self #
Same as a ** b.
- __rshift__(other: DaCompatible) Self #
Same as a >> b.
- __rsub__(other: DaCompatible) Self #
Same as a - b.
- __rtruediv__(other: DaCompatible) Self #
Same as a / b.
- __rxor__(other: DaCompatible) Self #
Same as a ^ b.
- __setattr__(name: str, value: Any) None #
Objects with __slots__ raise AttributeError if you try setting an undeclared attribute. This is desirable, but the error message could use some improvement.
- __sub__(other: DaCompatible) Self #
Same as a - b.
- __truediv__(other: DaCompatible) Self #
Same as a / b.
- __xor__(other: DaCompatible) Self #
Same as a ^ b.
- property abs#
Absolute value of data array.
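For complex-valued data this is the elementwise magnitude; a quick illustrative check:
>>> fd = SpatialDataArray(
...     (3 + 4j) * np.ones((1, 1, 1)), coords=dict(x=[0], y=[0], z=[0])
... )
>>> float(fd.abs.max())
5.0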
- all(dim: Dims = None, *, keep_attrs: bool | None = None, **kwargs: Any) Self #
Reduce this DataArray's data by applying all along some dimension(s).
- Parameters
dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply all, e.g. dim="x" or dim=["x", "y"]. If "..." or None, will reduce over all dimensions.
keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating all on this object's data. These could include dask-specific kwargs like split_every.
- Returns
reduced – New DataArray with all applied to its data and the indicated dimension(s) removed.
- Return type
DataArray
See also
numpy.all, dask.array.all, Dataset.all
- agg
User guide on reduction or aggregation operations.
Examples
>>> da = xr.DataArray(
...     np.array([True, True, True, True, True, False], dtype=bool),
...     dims="time",
...     coords=dict(
...         time=("time", pd.date_range("2001-01-01", freq="M", periods=6)),
...         labels=("time", np.array(["a", "b", "c", "c", "b", "a"])),
...     ),
... )
>>> da
<xarray.DataArray (time: 6)>
array([ True,  True,  True,  True,  True, False])
Coordinates:
  * time     (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.all()
<xarray.DataArray ()>
array(False)
- any(dim: Dims = None, *, keep_attrs: bool | None = None, **kwargs: Any) Self #
Reduce this DataArray's data by applying any along some dimension(s).
- Parameters
dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply any, e.g. dim="x" or dim=["x", "y"]. If "..." or None, will reduce over all dimensions.
keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating any on this object's data. These could include dask-specific kwargs like split_every.
- Returns
reduced – New DataArray with any applied to its data and the indicated dimension(s) removed.
- Return type
DataArray
See also
numpy.any, dask.array.any, Dataset.any
- agg
User guide on reduction or aggregation operations.
Examples
>>> da = xr.DataArray(
...     np.array([True, True, True, True, True, False], dtype=bool),
...     dims="time",
...     coords=dict(
...         time=("time", pd.date_range("2001-01-01", freq="M", periods=6)),
...         labels=("time", np.array(["a", "b", "c", "c", "b", "a"])),
...     ),
... )
>>> da
<xarray.DataArray (time: 6)>
array([ True,  True,  True,  True,  True, False])
Coordinates:
  * time     (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.any()
<xarray.DataArray ()>
array(True)
- argmax(dim: Dims = None, *, axis: int | None = None, keep_attrs: bool | None = None, skipna: bool | None = None) Self | dict[Hashable, Self] #
Index or indices of the maximum of the DataArray over one or more dimensions.
If a sequence is passed to 'dim', the result is returned as a dict of DataArrays, which can be passed directly to isel(). If a single str is passed to 'dim', a DataArray with dtype int is returned.
If there are multiple maxima, the indices of the first one found will be returned.
- Parameters
dim ("...", str, Iterable of Hashable or None, optional) – The dimensions over which to find the maximum. By default, finds maximum over all dimensions - for now returning an int for backward compatibility, but this is deprecated, in future will return a dict with indices for all dimensions; to return a dict with all dimensions now, pass ‘…’.
axis (int or None, optional) – Axis over which to apply argmax. Only one of the ‘dim’ and ‘axis’ arguments can be supplied.
keep_attrs (bool or None, optional) – If True, the attributes (attrs) will be copied from the original object to the new one. If False, the new object will be returned without attributes.
skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).
- Returns
result
- Return type
DataArray or dict of DataArray
See also
Variable.argmax
,DataArray.idxmax
Examples
>>> array = xr.DataArray([0, 2, -1, 3], dims="x")
>>> array.max()
<xarray.DataArray ()>
array(3)
>>> array.argmax(...)
{'x': <xarray.DataArray ()>
array(3)}
>>> array.isel(array.argmax(...))
<xarray.DataArray ()>
array(3)
>>> array = xr.DataArray(
...     [[[3, 2, 1], [3, 1, 2], [2, 1, 3]], [[1, 3, 2], [2, 5, 1], [2, 3, 1]]],
...     dims=("x", "y", "z"),
... )
>>> array.max(dim="x")
<xarray.DataArray (y: 3, z: 3)>
array([[3, 3, 2],
       [3, 5, 2],
       [2, 3, 3]])
Dimensions without coordinates: y, z
>>> array.argmax(dim="x")
<xarray.DataArray (y: 3, z: 3)>
array([[0, 1, 1],
       [0, 1, 0],
       [0, 1, 0]])
Dimensions without coordinates: y, z
>>> array.argmax(dim=["x"])
{'x': <xarray.DataArray (y: 3, z: 3)>
array([[0, 1, 1],
       [0, 1, 0],
       [0, 1, 0]])
Dimensions without coordinates: y, z}
>>> array.max(dim=("x", "z"))
<xarray.DataArray (y: 3)>
array([3, 5, 3])
Dimensions without coordinates: y
>>> array.argmax(dim=["x", "z"])
{'x': <xarray.DataArray (y: 3)>
array([0, 1, 0])
Dimensions without coordinates: y, 'z': <xarray.DataArray (y: 3)>
array([0, 1, 2])
Dimensions without coordinates: y}
>>> array.isel(array.argmax(dim=["x", "z"]))
<xarray.DataArray (y: 3)>
array([3, 5, 3])
Dimensions without coordinates: y
- argmin(dim: Dims = None, *, axis: int | None = None, keep_attrs: bool | None = None, skipna: bool | None = None) Self | dict[Hashable, Self] #
Index or indices of the minimum of the DataArray over one or more dimensions.
If a sequence is passed to 'dim', the result is returned as a dict of DataArrays, which can be passed directly to isel(). If a single str is passed to 'dim', a DataArray with dtype int is returned.
If there are multiple minima, the indices of the first one found will be returned.
- Parameters
dim ("...", str, Iterable of Hashable or None, optional) – The dimensions over which to find the minimum. By default, finds minimum over all dimensions - for now returning an int for backward compatibility, but this is deprecated, in future will return a dict with indices for all dimensions; to return a dict with all dimensions now, pass ‘…’.
axis (int or None, optional) – Axis over which to apply argmin. Only one of the ‘dim’ and ‘axis’ arguments can be supplied.
keep_attrs (bool or None, optional) – If True, the attributes (attrs) will be copied from the original object to the new one. If False, the new object will be returned without attributes.
skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).
- Returns
result
- Return type
DataArray or dict of DataArray
See also
Variable.argmin
,DataArray.idxmin
Examples
>>> array = xr.DataArray([0, 2, -1, 3], dims="x")
>>> array.min()
<xarray.DataArray ()>
array(-1)
>>> array.argmin(...)
{'x': <xarray.DataArray ()>
array(2)}
>>> array.isel(array.argmin(...))
<xarray.DataArray ()>
array(-1)
>>> array = xr.DataArray(
...     [[[3, 2, 1], [3, 1, 2], [2, 1, 3]], [[1, 3, 2], [2, -5, 1], [2, 3, 1]]],
...     dims=("x", "y", "z"),
... )
>>> array.min(dim="x")
<xarray.DataArray (y: 3, z: 3)>
array([[ 1,  2,  1],
       [ 2, -5,  1],
       [ 2,  1,  1]])
Dimensions without coordinates: y, z
>>> array.argmin(dim="x")
<xarray.DataArray (y: 3, z: 3)>
array([[1, 0, 0],
       [1, 1, 1],
       [0, 0, 1]])
Dimensions without coordinates: y, z
>>> array.argmin(dim=["x"])
{'x': <xarray.DataArray (y: 3, z: 3)>
array([[1, 0, 0],
       [1, 1, 1],
       [0, 0, 1]])
Dimensions without coordinates: y, z}
>>> array.min(dim=("x", "z"))
<xarray.DataArray (y: 3)>
array([ 1, -5,  1])
Dimensions without coordinates: y
>>> array.argmin(dim=["x", "z"])
{'x': <xarray.DataArray (y: 3)>
array([0, 1, 0])
Dimensions without coordinates: y, 'z': <xarray.DataArray (y: 3)>
array([2, 1, 1])
Dimensions without coordinates: y}
>>> array.isel(array.argmin(dim=["x", "z"]))
<xarray.DataArray (y: 3)>
array([ 1, -5,  1])
Dimensions without coordinates: y
- argsort(axis=- 1, kind=None, order=None)#
Returns the indices that would sort this array.
Refer to numpy.argsort for full documentation.
See also
numpy.argsort
equivalent function
- as_numpy() Self #
Coerces wrapped data and coordinates into numpy arrays, returning a DataArray.
See also
DataArray.to_numpy
Same but returns only the data as a numpy.ndarray object.
Dataset.as_numpy
Converts all variables in a Dataset.
DataArray.values, DataArray.data
- assign_attrs(*args: Any, **kwargs: Any) Self #
Assign new attrs to this object.
Returns a new object equivalent to self.attrs.update(*args, **kwargs).
- Parameters
*args – positional arguments passed into attrs.update.
**kwargs – keyword arguments passed into attrs.update.
Examples
>>> dataset = xr.Dataset({"temperature": [25, 30, 27]})
>>> dataset
<xarray.Dataset>
Dimensions:      (temperature: 3)
Coordinates:
  * temperature  (temperature) int64 25 30 27
Data variables:
    *empty*
>>> new_dataset = dataset.assign_attrs(
...     units="Celsius", description="Temperature data"
... )
>>> new_dataset
<xarray.Dataset>
Dimensions:      (temperature: 3)
Coordinates:
  * temperature  (temperature) int64 25 30 27
Data variables:
    *empty*
Attributes:
    units:        Celsius
    description:  Temperature data
>>> # Attributes of the new dataset
>>> new_dataset.attrs
{'units': 'Celsius', 'description': 'Temperature data'}
- Returns
assigned – A new object with the new attrs in addition to the existing data.
- Return type
same type as caller
See also
Dataset.assign
- classmethod assign_coord_attrs(val)#
Assign the correct coordinate attributes to the DataArray.
- assign_coords(coords: Mapping | None = None, **coords_kwargs: Any) Self #
Assign new coordinates to this object.
Returns a new object with all the original data in addition to the new coordinates.
- Parameters
coords (mapping of dim to coord, optional) –
A mapping whose keys are the names of the coordinates and values are the coordinates to assign. The mapping will generally be a dict or Coordinates.
If a value is a standard data value — for example, a DataArray, scalar, or array — the data is simply assigned as a coordinate.
If a value is callable, it is called with this object as the only parameter, and the return value is used as new coordinate variables.
A coordinate can also be defined and attached to an existing dimension using a tuple with the first element the dimension name and the second element the values for this new coordinate.
**coords_kwargs (optional) – The keyword arguments form of coords. One of coords or coords_kwargs must be provided.
- Returns
assigned – A new object with the new coordinates in addition to the existing data.
- Return type
same type as caller
Examples
Convert DataArray longitude coordinates from 0-359 to -180-179:
>>> da = xr.DataArray(
...     np.random.rand(4),
...     coords=[np.array([358, 359, 0, 1])],
...     dims="lon",
... )
>>> da
<xarray.DataArray (lon: 4)>
array([0.5488135 , 0.71518937, 0.60276338, 0.54488318])
Coordinates:
  * lon      (lon) int64 358 359 0 1
>>> da.assign_coords(lon=(((da.lon + 180) % 360) - 180))
<xarray.DataArray (lon: 4)>
array([0.5488135 , 0.71518937, 0.60276338, 0.54488318])
Coordinates:
  * lon      (lon) int64 -2 -1 0 1
The function also accepts dictionary arguments:
>>> da.assign_coords({"lon": (((da.lon + 180) % 360) - 180)})
<xarray.DataArray (lon: 4)>
array([0.5488135 , 0.71518937, 0.60276338, 0.54488318])
Coordinates:
  * lon      (lon) int64 -2 -1 0 1
New coordinate can also be attached to an existing dimension:
>>> lon_2 = np.array([300, 289, 0, 1])
>>> da.assign_coords(lon_2=("lon", lon_2))
<xarray.DataArray (lon: 4)>
array([0.5488135 , 0.71518937, 0.60276338, 0.54488318])
Coordinates:
  * lon      (lon) int64 358 359 0 1
    lon_2    (lon) int64 300 289 0 1
Note that the same result can also be obtained with a dict e.g.
>>> _ = da.assign_coords({"lon_2": ("lon", lon_2)})
Note the same method applies to Dataset objects.
Convert Dataset longitude coordinates from 0-359 to -180-179:
>>> temperature = np.linspace(20, 32, num=16).reshape(2, 2, 4)
>>> precipitation = 2 * np.identity(4).reshape(2, 2, 4)
>>> ds = xr.Dataset(
...     data_vars=dict(
...         temperature=(["x", "y", "time"], temperature),
...         precipitation=(["x", "y", "time"], precipitation),
...     ),
...     coords=dict(
...         lon=(["x", "y"], [[260.17, 260.68], [260.21, 260.77]]),
...         lat=(["x", "y"], [[42.25, 42.21], [42.63, 42.59]]),
...         time=pd.date_range("2014-09-06", periods=4),
...         reference_time=pd.Timestamp("2014-09-05"),
...     ),
...     attrs=dict(description="Weather-related data"),
... )
>>> ds
<xarray.Dataset>
Dimensions:         (x: 2, y: 2, time: 4)
Coordinates:
    lon             (x, y) float64 260.2 260.7 260.2 260.8
    lat             (x, y) float64 42.25 42.21 42.63 42.59
  * time            (time) datetime64[ns] 2014-09-06 2014-09-07 ... 2014-09-09
    reference_time  datetime64[ns] 2014-09-05
Dimensions without coordinates: x, y
Data variables:
    temperature     (x, y, time) float64 20.0 20.8 21.6 22.4 ... 30.4 31.2 32.0
    precipitation   (x, y, time) float64 2.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 2.0
Attributes:
    description:  Weather-related data
>>> ds.assign_coords(lon=(((ds.lon + 180) % 360) - 180))
<xarray.Dataset>
Dimensions:         (x: 2, y: 2, time: 4)
Coordinates:
    lon             (x, y) float64 -99.83 -99.32 -99.79 -99.23
    lat             (x, y) float64 42.25 42.21 42.63 42.59
  * time            (time) datetime64[ns] 2014-09-06 2014-09-07 ... 2014-09-09
    reference_time  datetime64[ns] 2014-09-05
Dimensions without coordinates: x, y
Data variables:
    temperature     (x, y, time) float64 20.0 20.8 21.6 22.4 ... 30.4 31.2 32.0
    precipitation   (x, y, time) float64 2.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 2.0
Attributes:
    description:  Weather-related data
See also
Dataset.assign
,Dataset.swap_dims
,Dataset.set_coords
- classmethod assign_data_attrs(val)#
Assign the correct data attributes to the DataArray.
- astype(dtype, *, order=None, casting=None, subok=None, copy=None, keep_attrs=True) Self #
Copy of the xarray object, with data cast to a specified type. Leaves coordinate dtype unchanged.
- Parameters
dtype (str or dtype) – Typecode or data-type to which the array is cast.
order ({'C', 'F', 'A', 'K'}, optional) – Controls the memory layout order of the result. ‘C’ means C order, ‘F’ means Fortran order, ‘A’ means ‘F’ order if all the arrays are Fortran contiguous, ‘C’ order otherwise, and ‘K’ means as close to the order the array elements appear in memory as possible.
casting ({'no', 'equiv', 'safe', 'same_kind', 'unsafe'}, optional) –
Controls what kind of data casting may occur.
’no’ means the data types should not be cast at all.
’equiv’ means only byte-order changes are allowed.
’safe’ means only casts which can preserve values are allowed.
’same_kind’ means only safe casts or casts within a kind, like float64 to float32, are allowed.
’unsafe’ means any data conversions may be done.
subok (bool, optional) – If True, then sub-classes will be passed-through, otherwise the returned array will be forced to be a base-class array.
copy (bool, optional) – By default, astype always returns a newly allocated array. If this is set to False and the dtype requirement is satisfied, the input array is returned instead of a copy.
keep_attrs (bool, optional) – By default, astype keeps attributes. Set to False to remove attributes in the returned object.
- Returns
out – New object with data cast to the specified type.
- Return type
same as object
Notes
The order, casting, subok and copy arguments are only passed through to the astype method of the underlying array when a value different than None is supplied. Make sure to only supply these arguments if the underlying array class supports them.
See also
numpy.ndarray.astype, dask.array.Array.astype, sparse.COO.astype
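A quick illustration on numpy-backed data:
>>> da = xr.DataArray(np.array([1.7, 2.2]), dims="x")
>>> da.astype("int32").dtype
dtype('int32')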
- property attrs: dict[Any, Any]#
Dictionary storing arbitrary metadata with this array.
- bfill(dim: Hashable, limit: int | None = None) Self #
Fill NaN values by propagating values backward
Requires bottleneck.
- Parameters
dim (str) – Specifies the dimension along which to propagate values when filling.
limit (int or None, default: None) – The maximum number of consecutive NaN values to backward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. Must be greater than 0 or None for no limit. Must be None or greater than or equal to axis length if filling along chunked axes (dimensions).
- Returns
filled
- Return type
DataArray
Examples
>>> temperature = np.array(
...     [
...         [0, 1, 3],
...         [0, np.nan, 5],
...         [5, np.nan, np.nan],
...         [3, np.nan, np.nan],
...         [np.nan, 2, 0],
...     ]
... )
>>> da = xr.DataArray(
...     data=temperature,
...     dims=["Y", "X"],
...     coords=dict(
...         lat=("Y", np.array([-20.0, -20.25, -20.50, -20.75, -21.0])),
...         lon=("X", np.array([10.0, 10.25, 10.5])),
...     ),
... )
>>> da
<xarray.DataArray (Y: 5, X: 3)>
array([[ 0.,  1.,  3.],
       [ 0., nan,  5.],
       [ 5., nan, nan],
       [ 3., nan, nan],
       [nan,  2.,  0.]])
Coordinates:
    lat      (Y) float64 -20.0 -20.25 -20.5 -20.75 -21.0
    lon      (X) float64 10.0 10.25 10.5
Dimensions without coordinates: Y, X
Fill all NaN values:
>>> da.bfill(dim="Y", limit=None)
<xarray.DataArray (Y: 5, X: 3)>
array([[ 0.,  1.,  3.],
       [ 0.,  2.,  5.],
       [ 5.,  2.,  0.],
       [ 3.,  2.,  0.],
       [nan,  2.,  0.]])
Coordinates:
    lat      (Y) float64 -20.0 -20.25 -20.5 -20.75 -21.0
    lon      (X) float64 10.0 10.25 10.5
Dimensions without coordinates: Y, X
Fill only the first of consecutive NaN values:
>>> da.bfill(dim="Y", limit=1)
<xarray.DataArray (Y: 5, X: 3)>
array([[ 0.,  1.,  3.],
       [ 0., nan,  5.],
       [ 5., nan, nan],
       [ 3.,  2.,  0.],
       [nan,  2.,  0.]])
Coordinates:
    lat      (Y) float64 -20.0 -20.25 -20.5 -20.75 -21.0
    lon      (X) float64 10.0 10.25 10.5
Dimensions without coordinates: Y, X
- broadcast_equals(other: Self) bool #
Two DataArrays are broadcast equal if they are equal after broadcasting them against each other such that they have the same dimensions.
- Parameters
other (DataArray) – DataArray to compare to.
- Returns
equal – True if the two DataArrays are broadcast equal.
- Return type
bool
See also
DataArray.equals
,DataArray.identical
Examples
>>> a = xr.DataArray([1, 2], dims="X")
>>> b = xr.DataArray([[1, 1], [2, 2]], dims=["X", "Y"])
>>> a
<xarray.DataArray (X: 2)>
array([1, 2])
Dimensions without coordinates: X
>>> b
<xarray.DataArray (X: 2, Y: 2)>
array([[1, 1],
       [2, 2]])
Dimensions without coordinates: X, Y
.equals returns True if two DataArrays have the same values, dimensions, and coordinates. .broadcast_equals returns True if the results of broadcasting two DataArrays against each other have the same values, dimensions, and coordinates.
>>> a.equals(b)
False
>>> a2, b2 = xr.broadcast(a, b)
>>> a2.equals(b2)
True
>>> a.broadcast_equals(b)
True
- broadcast_like(other: T_DataArrayOrSet, *, exclude: Iterable[Hashable] | None = None) Self #
Broadcast this DataArray against another Dataset or DataArray.
This is equivalent to xr.broadcast(other, self)[1].
xarray objects are broadcast against each other in arithmetic operations, so this method should not be necessary for most uses.
If no change is needed, the input data is returned to the output without being copied.
If new coords are added by the broadcast, their values are NaN filled.
- Parameters
other (Dataset or DataArray) – Object against which to broadcast this array.
exclude (iterable of Hashable, optional) – Dimensions that must not be broadcasted
- Returns
new_da – The caller broadcasted against
other
.- Return type
DataArray
Examples
>>> arr1 = xr.DataArray(
...     np.random.randn(2, 3),
...     dims=("x", "y"),
...     coords={"x": ["a", "b"], "y": ["a", "b", "c"]},
... )
>>> arr2 = xr.DataArray(
...     np.random.randn(3, 2),
...     dims=("x", "y"),
...     coords={"x": ["a", "b", "c"], "y": ["a", "b"]},
... )
>>> arr1
<xarray.DataArray (x: 2, y: 3)>
array([[ 1.76405235,  0.40015721,  0.97873798],
       [ 2.2408932 ,  1.86755799, -0.97727788]])
Coordinates:
  * x        (x) <U1 'a' 'b'
  * y        (y) <U1 'a' 'b' 'c'
>>> arr2
<xarray.DataArray (x: 3, y: 2)>
array([[ 0.95008842, -0.15135721],
       [-0.10321885,  0.4105985 ],
       [ 0.14404357,  1.45427351]])
Coordinates:
  * x        (x) <U1 'a' 'b' 'c'
  * y        (y) <U1 'a' 'b'
>>> arr1.broadcast_like(arr2)
<xarray.DataArray (x: 3, y: 3)>
array([[ 1.76405235,  0.40015721,  0.97873798],
       [ 2.2408932 ,  1.86755799, -0.97727788],
       [        nan,         nan,         nan]])
Coordinates:
  * x        (x) <U1 'a' 'b' 'c'
  * y        (y) <U1 'a' 'b' 'c'
- classmethod check_unloaded_data(val)#
If the data comes in as the raw data array string, raise a custom warning.
- chunk(chunks: T_Chunks = {}, *, name_prefix: str = 'xarray-', token: str | None = None, lock: bool = False, inline_array: bool = False, chunked_array_type: str | ChunkManagerEntrypoint | None = None, from_array_kwargs=None, **chunks_kwargs: Any) Self #
Coerce this array's data into a dask array with the given chunks.
If this variable is a non-dask array, it will be converted to a dask array. If it's a dask array, it will be rechunked to the given chunk sizes.
If chunks is not provided for one or more dimensions, chunk sizes along that dimension will not be updated; non-dask arrays will be converted into dask arrays with a single block.
- Parameters
chunks (int, "auto", tuple of int or mapping of Hashable to int, optional) – Chunk sizes along each dimension, e.g., 5, "auto", (5, 5) or {"x": 5, "y": 5}.
name_prefix (str, optional) – Prefix for the name of the new dask array.
token (str, optional) – Token uniquely identifying this array.
lock (bool, default: False) – Passed on to dask.array.from_array(), if the array is not already a dask array.
inline_array (bool, default: False) – Passed on to dask.array.from_array(), if the array is not already a dask array.
chunked_array_type (str, optional) – Which chunked array type to coerce the underlying data array to. Defaults to 'dask' if installed, else whatever is registered via the ChunkManagerEntryPoint system. Experimental API that should not be relied upon.
from_array_kwargs (dict, optional) – Additional keyword arguments passed on to the ChunkManagerEntrypoint.from_array method used to create chunked arrays, via whichever chunk manager is specified through the chunked_array_type kwarg. For example, with dask as the default chunked array type, this method would pass additional kwargs to dask.array.from_array(). Experimental API that should not be relied upon.
**chunks_kwargs ({dim: chunks, ...}, optional) – The keyword arguments form of chunks. One of chunks or chunks_kwargs must be provided.
- Returns
chunked
- Return type
xarray.DataArray
See also
DataArray.chunks, DataArray.chunksizes, xarray.unify_chunks, dask.array.from_array
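A short sketch (requires dask to be installed; the exact repr of chunksizes may vary by xarray version):
>>> da = xr.DataArray(np.arange(6), dims="x")
>>> chunked = da.chunk({"x": 2})  # dask-backed copy in three blocks
>>> chunked.chunksizes
Frozen({'x': (2, 2, 2)})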
- property chunks: tuple[tuple[int, ...], ...] | None#
Tuple of block lengths for this dataarray’s data, in order of dimensions, or None if the underlying data is not a dask array.
See also
DataArray.chunk, DataArray.chunksizes, xarray.unify_chunks
- property chunksizes: collections.abc.Mapping[Any, tuple[int, ...]]#
Mapping from dimension names to block lengths for this dataarray’s data, or None if the underlying data is not a dask array. Cannot be modified directly, but can be modified by calling .chunk().
Differs from DataArray.chunks because it returns a mapping of dimensions to chunk shapes instead of a tuple of chunk shapes.
See also
DataArray.chunk, DataArray.chunks, xarray.unify_chunks
- clip(min: ScalarOrArray | None = None, max: ScalarOrArray | None = None, *, keep_attrs: bool | None = None) Self #
Return an array whose values are limited to [min, max]. At least one of max or min must be given.
- Parameters
min (None or Hashable, optional) – Minimum value. If None, no lower clipping is performed.
max (None or Hashable, optional) – Maximum value. If None, no upper clipping is performed.
keep_attrs (bool or None, optional) – If True, the attributes (attrs) will be copied from the original object to the new one. If False, the new object will be returned without attributes.
- Returns
clipped – This object, but with values < min replaced with min, and those > max with max.
- Return type
same type as caller
See also
numpy.clip
equivalent function
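For example:
>>> da = xr.DataArray(np.arange(5), dims="x")
>>> da.clip(min=1, max=3).values
array([1, 1, 2, 3, 3])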
- close() None #
Release any resources linked to this object.
- coarsen(dim: Mapping[Any, int] | None = None, boundary: CoarsenBoundaryOptions = 'exact', side: SideOptions | Mapping[Any, SideOptions] = 'left', coord_func: str | Callable | Mapping[Any, str | Callable] = 'mean', **window_kwargs: int) DataArrayCoarsen #
Coarsen object for DataArrays.
- Parameters
dim (mapping of hashable to int, optional) – Mapping from the dimension name to the window size.
boundary ({"exact", "trim", "pad"}, default: "exact") – If ‘exact’, a ValueError will be raised if dimension size is not a multiple of the window size. If ‘trim’, the excess entries are dropped. If ‘pad’, NA will be padded.
side ({"left", "right"} or mapping of str to {"left", "right"}, default: "left") –
coord_func (str or mapping of hashable to str, default: "mean") – function (name) that is applied to the coordinates, or a mapping from coordinate name to function (name).
- Return type
core.rolling.DataArrayCoarsen
Examples
Coarsen the long time series by averaging over every three days.
>>> da = xr.DataArray(
...     np.linspace(0, 364, num=364),
...     dims="time",
...     coords={"time": pd.date_range("1999-12-15", periods=364)},
... )
>>> da  # +doctest: ELLIPSIS
<xarray.DataArray (time: 364)>
array([  0.        ,   1.00275482,   2.00550964,   3.00826446,
         4.01101928,   5.0137741 ,   6.01652893,   7.01928375,
         8.02203857,   9.02479339,  10.02754821,  11.03030303,
...
       356.98071625, 357.98347107, 358.9862259 , 359.98898072,
       360.99173554, 361.99449036, 362.99724518, 364.        ])
Coordinates:
  * time     (time) datetime64[ns] 1999-12-15 1999-12-16 ... 2000-12-12
>>> da.coarsen(time=3, boundary="trim").mean()  # +doctest: ELLIPSIS
<xarray.DataArray (time: 121)>
array([  1.00275482,   4.01101928,   7.01928375,  10.02754821,
        13.03581267,  16.04407713,  19.0523416 ,  22.06060606,
        25.06887052,  28.07713499,  31.08539945,  34.09366391,
...
       349.96143251, 352.96969697, 355.97796143, 358.9862259 ,
       361.99449036])
Coordinates:
  * time     (time) datetime64[ns] 1999-12-16 1999-12-19 ... 2000-12-10
See also
core.rolling.DataArrayCoarsen, Dataset.coarsen
- reshape.coarsen
User guide describing coarsen()
- compute.coarsen
User guide on block aggregation with coarsen()
- xarray-tutorial:fundamentals/03.3_windowed
Tutorial on windowed computation using coarsen()
- combine_first(other: Self) Self #
Combine two DataArray objects, with union of coordinates.
This operation follows the normal broadcasting and alignment rules of join='outer'. It defaults to the non-null values of the array calling the method; np.nan is used to fill in vacant cells after alignment.
- Parameters
other (DataArray) – Used to fill all matching missing values in this array.
- Return type
DataArray
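A short sketch showing that values of the calling array win where both are non-null:
>>> a = xr.DataArray([1.0, np.nan], dims="x", coords={"x": [0, 1]})
>>> b = xr.DataArray([3.0, 4.0, 5.0], dims="x", coords={"x": [0, 1, 2]})
>>> a.combine_first(b).values
array([1., 4., 5.])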
- compute(**kwargs) Self #
Manually trigger loading of this array’s data from disk or a remote source into memory and return a new array. The original is left unaltered.
Normally, it should not be necessary to call this method in user code, because all xarray functions should either work on deferred data or load data automatically. However, this method can be necessary when working with many file objects on disk.
- Parameters
**kwargs (dict) – Additional keyword arguments passed on to dask.compute.
See also
dask.compute
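A minimal sketch of the difference from load() (requires dask):
>>> lazy = xr.DataArray(np.arange(4), dims="x").chunk({"x": 2})
>>> loaded = lazy.compute()  # returns a new in-memory array
>>> # `lazy` itself still wraps a dask array; load() would modify it in place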
- conj()#
Complex-conjugate all elements.
Refer to numpy.conjugate for full documentation.
See also
numpy.conjugate
equivalent function
- conjugate()#
Return the complex conjugate, element-wise.
Refer to numpy.conjugate for full documentation.
See also
numpy.conjugate
equivalent function
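For example:
>>> da = xr.DataArray(np.array([1 + 2j, 3 - 4j]))
>>> da.conj().values
array([1.-2.j, 3.+4.j])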
- convert_calendar(calendar: str, dim: str = 'time', align_on: str | None = None, missing: Any | None = None, use_cftime: bool | None = None) Self #
Convert the DataArray to another calendar.
Only converts the individual timestamps, does not modify any data except in dropping invalid/surplus dates or inserting missing dates.
If the source and target calendars are either no_leap, all_leap or a standard type, only the type of the time array is modified. When converting to a leap year from a non-leap year, the 29th of February is removed from the array. In the other direction the 29th of February will be missing in the output, unless missing is specified, in which case that value is inserted.
For conversions involving 360_day calendars, see Notes.
This method is safe to use with sub-daily data as it doesn’t touch the time part of the timestamps.
- Parameters
calendar (str) – The target calendar name.
dim (str) – Name of the time coordinate.
align_on ({None, 'date', 'year'}) – Must be specified when either source or target is a 360_day calendar, ignored otherwise. See Notes.
missing (Any or None, optional) – By default, i.e. if the value is None, this method will simply attempt to convert the dates in the source calendar to the same dates in the target calendar, and drop any of those that are not possible to represent. If a value is provided, a new time coordinate will be created in the target calendar with the same frequency as the original time coordinate; for any dates that are not present in the source, the data will be filled with this value. Note that using this mode requires that the source data have an inferable frequency; for more information see xarray.infer_freq(). For certain frequency, source, and target calendar combinations, this could result in many missing values, see notes.
use_cftime (bool or None, optional) – Whether to use cftime objects in the output, only used if calendar is one of {"proleptic_gregorian", "gregorian" or "standard"}. If True, the new time axis uses cftime objects. If None (default), it uses numpy.datetime64 values if the date range permits it, and cftime.datetime objects if not. If False, it uses numpy.datetime64 or fails.
- Returns
Copy of the dataarray with the time coordinate converted to the target calendar. If ‘missing’ was None (default), invalid dates in the new calendar are dropped, but missing dates are not inserted. If missing was given, the new data is reindexed to have a time axis with the same frequency as the source, but in the new calendar; any missing datapoints are filled with missing.
- Return type
DataArray
Notes
Passing a value to missing is only usable if the source's time coordinate has an inferable frequency (see infer_freq()) and is only appropriate if the target coordinate, generated from this frequency, has dates equivalent to the source. It is usually not appropriate to use this mode with:
- Period-end frequencies: 'A', 'Y', 'Q' or 'M', as opposed to 'AS', 'YS', 'QS' and 'MS'.
- Sub-monthly frequencies that do not divide a day evenly: 'W', 'nD' where n != 1, or 'mH' where 24 % m != 0.
If one of the source or target calendars is “360_day”, align_on must be specified and two options are offered.
- “year”
The dates are translated according to their relative position in the year, ignoring their original month and day information, meaning that the missing/surplus days are added/removed at regular intervals.
From a 360_day to a standard calendar, the output will be missing the following dates (day of year in parentheses):
- To a leap year:
January 31st (31), March 31st (91), June 1st (153), July 31st (213), September 31st (275) and November 30th (335).
- To a non-leap year:
February 6th (36), April 19th (109), July 2nd (183), September 12th (255), November 25th (329).
From a standard calendar to a “360_day”, the following dates in the source array will be dropped:
- From a leap year:
January 31st (31), April 1st (92), June 1st (153), August 1st (214), September 31st (275), December 1st (336)
- From a non-leap year:
February 6th (37), April 20th (110), July 2nd (183), September 13th (256), November 25th (329)
This option is best used on daily and subdaily data.
- “date”
The month/day information is conserved and invalid dates are dropped from the output. This means that when converting from a “360_day” to a standard calendar, all 31st (Jan, March, May, July, August, October and December) will be missing as there is no equivalent dates in the “360_day” calendar and the 29th (on non-leap years) and 30th of February will be dropped as there are no equivalent dates in a standard calendar.
This option is best used with data on a frequency coarser than daily.
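A short sketch (requires the cftime package; the sizes repr may vary by xarray version):
>>> times = pd.date_range("2000-02-27", periods=4)  # spans Feb 29 of a leap year
>>> da = xr.DataArray(np.arange(4), dims="time", coords={"time": times})
>>> da.convert_calendar("noleap").sizes  # Feb 29 is dropped
Frozen({'time': 3})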
- property coords: xarray.core.coordinates.DataArrayCoordinates#
Mapping of DataArray objects corresponding to coordinate variables.
See also
Coordinates
- copy(deep: bool = True, data: Any = None) Self #
Returns a copy of this array.
If deep=True, a deep copy is made of the data array. Otherwise, a shallow copy is made, and the returned data array’s values are a new view of this data array’s values.
Use data to create a new object with the same structure as original but entirely new data.
- Parameters
deep (bool, optional) – Whether the data array and its coordinates are loaded into memory and copied onto the new object. Default is True.
data (array_like, optional) – Data to use in the new object. Must have same shape as original. When data is used, deep is ignored for all data variables, and only used for coords.
- Returns
copy – New object with dimensions, attributes, coordinates, name, encoding, and optionally data copied from original.
- Return type
DataArray
Examples
Shallow versus deep copy
>>> array = xr.DataArray([1, 2, 3], dims="x", coords={"x": ["a", "b", "c"]})
>>> array.copy()
<xarray.DataArray (x: 3)>
array([1, 2, 3])
Coordinates:
  * x        (x) <U1 'a' 'b' 'c'
>>> array_0 = array.copy(deep=False)
>>> array_0[0] = 7
>>> array_0
<xarray.DataArray (x: 3)>
array([7, 2, 3])
Coordinates:
  * x        (x) <U1 'a' 'b' 'c'
>>> array
<xarray.DataArray (x: 3)>
array([7, 2, 3])
Coordinates:
  * x        (x) <U1 'a' 'b' 'c'
Changing the data using the data argument maintains the structure of the original object, but with the new data. The original object is unaffected.
>>> array.copy(data=[0.1, 0.2, 0.3])
<xarray.DataArray (x: 3)>
array([0.1, 0.2, 0.3])
Coordinates:
  * x        (x) <U1 'a' 'b' 'c'
>>> array
<xarray.DataArray (x: 3)>
array([7, 2, 3])
Coordinates:
  * x        (x) <U1 'a' 'b' 'c'
See also
pandas.DataFrame.copy
- count(dim: Dims = None, *, keep_attrs: bool | None = None, **kwargs: Any) Self #
Reduce this DataArray's data by applying count along some dimension(s).
- Parameters
dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply count, e.g. dim="x" or dim=["x", "y"]. If "..." or None, will reduce over all dimensions.
keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating count on this object's data. These could include dask-specific kwargs like split_every.
- Returns
reduced – New DataArray with count applied to its data and the indicated dimension(s) removed.
- Return type
DataArray
See also
pandas.DataFrame.count, dask.dataframe.DataFrame.count, Dataset.count
- agg
User guide on reduction or aggregation operations.
Examples
>>> da = xr.DataArray(
...     np.array([1, 2, 3, 0, 2, np.nan]),
...     dims="time",
...     coords=dict(
...         time=("time", pd.date_range("2001-01-01", freq="M", periods=6)),
...         labels=("time", np.array(["a", "b", "c", "c", "b", "a"])),
...     ),
... )
>>> da
<xarray.DataArray (time: 6)>
array([ 1.,  2.,  3.,  0.,  2., nan])
Coordinates:
  * time     (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.count()
<xarray.DataArray ()>
array(5)
- cumprod(dim: Dims = None, *, skipna: bool | None = None, keep_attrs: bool | None = None, **kwargs: Any) Self #
Reduce this DataArray's data by applying cumprod along some dimension(s).
- Parameters
dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply cumprod, e.g. dim="x" or dim=["x", "y"]. If "..." or None, will reduce over all dimensions.
skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).
keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating cumprod on this object's data. These could include dask-specific kwargs like split_every.
- Returns
reduced – New DataArray with cumprod applied to its data and the indicated dimension(s) removed.
- Return type
DataArray
See also
numpy.cumprod, dask.array.cumprod, Dataset.cumprod
- agg
User guide on reduction or aggregation operations.
Notes
Non-numeric variables will be removed prior to reducing.
Examples
>>> da = xr.DataArray(
...     np.array([1, 2, 3, 0, 2, np.nan]),
...     dims="time",
...     coords=dict(
...         time=("time", pd.date_range("2001-01-01", freq="M", periods=6)),
...         labels=("time", np.array(["a", "b", "c", "c", "b", "a"])),
...     ),
... )
>>> da
<xarray.DataArray (time: 6)>
array([ 1.,  2.,  3.,  0.,  2., nan])
Coordinates:
  * time     (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.cumprod()
<xarray.DataArray (time: 6)>
array([1., 2., 6., 0., 0., 0.])
Coordinates:
  * time     (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
Use skipna to control whether NaNs are ignored.
>>> da.cumprod(skipna=False)
<xarray.DataArray (time: 6)>
array([ 1.,  2.,  6.,  0.,  0., nan])
Coordinates:
  * time     (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
- cumsum(dim: Dims = None, *, skipna: bool | None = None, keep_attrs: bool | None = None, **kwargs: Any) Self #
Reduce this DataArray's data by applying cumsum along some dimension(s).
- Parameters
dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply cumsum, e.g. dim="x" or dim=["x", "y"]. If "..." or None, will reduce over all dimensions.
skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).
keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating cumsum on this object's data. These could include dask-specific kwargs like split_every.
- Returns
reduced – New DataArray with cumsum applied to its data and the indicated dimension(s) removed.
- Return type
DataArray
See also
numpy.cumsum, dask.array.cumsum, Dataset.cumsum
- agg
User guide on reduction or aggregation operations.
Notes
Non-numeric variables will be removed prior to reducing.
Examples
>>> da = xr.DataArray( ... np.array([1, 2, 3, 0, 2, np.nan]), ... dims="time", ... coords=dict( ... time=("time", pd.date_range("2001-01-01", freq="M", periods=6)), ... labels=("time", np.array(["a", "b", "c", "c", "b", "a"])), ... ), ... ) >>> da <xarray.DataArray (time: 6)> array([ 1., 2., 3., 0., 2., nan]) Coordinates: * time (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30 labels (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.cumsum() <xarray.DataArray (time: 6)> array([1., 3., 6., 6., 8., 8.]) Coordinates: * time (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30 labels (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
Use skipna to control whether NaNs are ignored.
>>> da.cumsum(skipna=False) <xarray.DataArray (time: 6)> array([ 1., 3., 6., 6., 8., nan]) Coordinates: * time (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30 labels (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
- cumulative_integrate(coord: Hashable | Sequence[Hashable] = None, datetime_unit: DatetimeUnitOptions = None) Self #
Integrate cumulatively along the given coordinate using the trapezoidal rule.
Note
This feature is limited to simple cartesian geometry, i.e. coord must be one dimensional.
The first entry of the cumulative integral is always 0, in order to keep the length of the dimension unchanged between input and output.
- Parameters
coord (Hashable, or sequence of Hashable) – Coordinate(s) used for the integration.
datetime_unit ({'Y', 'M', 'W', 'D', 'h', 'm', 's', 'ms', 'us', 'ns', 'ps', 'fs', 'as', None}, optional) – Specify the unit if a datetime coordinate is used.
- Returns
integrated
- Return type
DataArray
See also
Dataset.cumulative_integrate
scipy.integrate.cumulative_trapezoid
corresponding scipy function
Examples
>>> da = xr.DataArray( ... np.arange(12).reshape(4, 3), ... dims=["x", "y"], ... coords={"x": [0, 0.1, 1.1, 1.2]}, ... ) >>> da <xarray.DataArray (x: 4, y: 3)> array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]]) Coordinates: * x (x) float64 0.0 0.1 1.1 1.2 Dimensions without coordinates: y >>> >>> da.cumulative_integrate("x") <xarray.DataArray (x: 4, y: 3)> array([[0. , 0. , 0. ], [0.15, 0.25, 0.35], [4.65, 5.75, 6.85], [5.4 , 6.6 , 7.8 ]]) Coordinates: * x (x) float64 0.0 0.1 1.1 1.2 Dimensions without coordinates: y
- curvefit(coords: str | DataArray | Iterable[str | DataArray], func: Callable[..., Any], reduce_dims: Dims = None, skipna: bool = True, p0: Mapping[str, float | DataArray] | None = None, bounds: Mapping[str, tuple[float | DataArray, float | DataArray]] | None = None, param_names: Sequence[str] | None = None, errors: ErrorOptions = 'raise', kwargs: dict[str, Any] | None = None) Dataset #
Curve fitting optimization for arbitrary functions.
Wraps scipy.optimize.curve_fit with apply_ufunc.
- Parameters
coords (Hashable, DataArray, or sequence of DataArray or Hashable) – Independent coordinate(s) over which to perform the curve fitting. Must share at least one dimension with the calling object. When fitting multi-dimensional functions, supply coords as a sequence in the same order as arguments in func. To fit along existing dimensions of the calling object, coords can also be specified as a str or sequence of strs.
func (callable) – User specified function in the form f(x, *params) which returns a numpy array of length len(x). params are the fittable parameters which are optimized by scipy curve_fit. x can also be specified as a sequence containing multiple coordinates, e.g. f((x0, x1), *params).
reduce_dims (str, Iterable of Hashable or None, optional) – Additional dimension(s) over which to aggregate while fitting. For example, calling ds.curvefit(coords=’time’, reduce_dims=[‘lat’, ‘lon’], …) will aggregate all lat and lon points and fit the specified function along the time dimension.
skipna (bool, default: True) – Whether to skip missing values when fitting. Default is True.
p0 (dict-like or None, optional) – Optional dictionary of parameter names to initial guesses passed to the curve_fit p0 arg. If the values are DataArrays, they will be appropriately broadcast to the coordinates of the array. If none or only some parameters are passed, the rest will be assigned initial values following the default scipy behavior.
bounds (dict-like, optional) – Optional dictionary of parameter names to tuples of bounding values passed to the curve_fit bounds arg. If any of the bounds are DataArrays, they will be appropriately broadcast to the coordinates of the array. If none or only some parameters are passed, the rest will be unbounded following the default scipy behavior.
param_names (sequence of Hashable or None, optional) – Sequence of names for the fittable parameters of func. If not supplied, this will be automatically determined by arguments of func. param_names should be manually supplied when fitting a function that takes a variable number of parameters.
errors ({"raise", "ignore"}, default: "raise") – If 'raise', any errors from the scipy.optimize.curve_fit optimization will raise an exception. If 'ignore', the coefficients and covariances for the coordinates where the fitting failed will be NaN.
**kwargs (optional) – Additional keyword arguments passed to scipy curve_fit.
- Returns
curvefit_results – A single dataset which contains:
- [var]_curvefit_coefficients
The coefficients of the best fit.
- [var]_curvefit_covariance
The covariance matrix of the coefficient estimates.
- Return type
Dataset
Examples
Generate some exponentially decaying data, where the decay constant and amplitude are different for different values of the coordinate x:
>>> rng = np.random.default_rng(seed=0) >>> def exp_decay(t, time_constant, amplitude): ... return np.exp(-t / time_constant) * amplitude ... >>> t = np.arange(11) >>> da = xr.DataArray( ... np.stack( ... [ ... exp_decay(t, 1, 0.1), ... exp_decay(t, 2, 0.2), ... exp_decay(t, 3, 0.3), ... ] ... ) ... + rng.normal(size=(3, t.size)) * 0.01, ... coords={"x": [0, 1, 2], "time": t}, ... ) >>> da <xarray.DataArray (x: 3, time: 11)> array([[ 0.1012573 , 0.0354669 , 0.01993775, 0.00602771, -0.00352513, 0.00428975, 0.01328788, 0.009562 , -0.00700381, -0.01264187, -0.0062282 ], [ 0.20041326, 0.09805582, 0.07138797, 0.03216692, 0.01974438, 0.01097441, 0.00679441, 0.01015578, 0.01408826, 0.00093645, 0.01501222], [ 0.29334805, 0.21847449, 0.16305984, 0.11130396, 0.07164415, 0.04744543, 0.03602333, 0.03129354, 0.01074885, 0.01284436, 0.00910995]]) Coordinates: * x (x) int64 0 1 2 * time (time) int64 0 1 2 3 4 5 6 7 8 9 10
Fit the exponential decay function to the data along the time dimension:
>>> fit_result = da.curvefit("time", exp_decay) >>> fit_result["curvefit_coefficients"].sel( ... param="time_constant" ... ) <xarray.DataArray 'curvefit_coefficients' (x: 3)> array([1.0569203, 1.7354963, 2.9421577]) Coordinates: * x (x) int64 0 1 2 param <U13 'time_constant' >>> fit_result["curvefit_coefficients"].sel(param="amplitude") <xarray.DataArray 'curvefit_coefficients' (x: 3)> array([0.1005489 , 0.19631423, 0.30003579]) Coordinates: * x (x) int64 0 1 2 param <U13 'amplitude'
An initial guess can also be given with the p0 arg (although it does not make much of a difference in this simple example). To have a different guess for different coordinate points, the guess can be a DataArray. Here we use the same initial guess for the amplitude but different guesses for the time constant:
>>> fit_result = da.curvefit( ... "time", ... exp_decay, ... p0={ ... "amplitude": 0.2, ... "time_constant": xr.DataArray([1, 2, 3], coords=[da.x]), ... }, ... ) >>> fit_result["curvefit_coefficients"].sel(param="time_constant") <xarray.DataArray 'curvefit_coefficients' (x: 3)> array([1.0569213 , 1.73550052, 2.94215733]) Coordinates: * x (x) int64 0 1 2 param <U13 'time_constant' >>> fit_result["curvefit_coefficients"].sel(param="amplitude") <xarray.DataArray 'curvefit_coefficients' (x: 3)> array([0.10054889, 0.1963141 , 0.3000358 ]) Coordinates: * x (x) int64 0 1 2 param <U13 'amplitude'
See also
DataArray.polyfit
,scipy.optimize.curve_fit
- property data: Any#
The DataArray’s data as an array. The underlying array type (e.g. dask, sparse, pint) is preserved.
See also
DataArray.to_numpy, DataArray.as_numpy, DataArray.values
- diff(dim: Hashable, n: int = 1, *, label: Literal['upper', 'lower'] = 'upper') Self #
Calculate the n-th order discrete difference along given axis.
- Parameters
dim (Hashable) – Dimension over which to calculate the finite difference.
n (int, default: 1) – The number of times values are differenced.
label ({"upper", "lower"}, default: "upper") – The new coordinate in dimension dim will have the values of either the minuend's or subtrahend's coordinate for values 'upper' and 'lower', respectively.
- Returns
difference – The n-th order finite difference of this object.
- Return type
DataArray
Notes
n matches numpy’s behavior and is different from pandas’ first argument named periods.
Examples
>>> arr = xr.DataArray([5, 5, 6, 6], [[1, 2, 3, 4]], ["x"]) >>> arr.diff("x") <xarray.DataArray (x: 3)> array([0, 1, 0]) Coordinates: * x (x) int64 2 3 4 >>> arr.diff("x", 2) <xarray.DataArray (x: 2)> array([ 1, -1]) Coordinates: * x (x) int64 3 4
See also
DataArray.differentiate
- differentiate(coord: Hashable, edge_order: Literal[1, 2] = 1, datetime_unit: DatetimeUnitOptions = None) Self #
Differentiate the array with second-order accurate central differences.
Note
This feature is limited to simple cartesian geometry, i.e. coord must be one dimensional.
- Parameters
coord (Hashable) – The coordinate to be used to compute the gradient.
edge_order ({1, 2}, default: 1) – N-th order accurate differences at the boundaries.
datetime_unit ({"W", "D", "h", "m", "s", "ms", "us", "ns", "ps", "fs", "as", None}, optional) – Unit to compute gradient. Only valid for datetime coordinate. “Y” and “M” are not available as datetime_unit.
- Returns
differentiated
- Return type
DataArray
See also
numpy.gradient
corresponding numpy function
Examples
>>> da = xr.DataArray( ... np.arange(12).reshape(4, 3), ... dims=["x", "y"], ... coords={"x": [0, 0.1, 1.1, 1.2]}, ... ) >>> da <xarray.DataArray (x: 4, y: 3)> array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]]) Coordinates: * x (x) float64 0.0 0.1 1.1 1.2 Dimensions without coordinates: y >>> >>> da.differentiate("x") <xarray.DataArray (x: 4, y: 3)> array([[30. , 30. , 30. ], [27.54545455, 27.54545455, 27.54545455], [27.54545455, 27.54545455, 27.54545455], [30. , 30. , 30. ]]) Coordinates: * x (x) float64 0.0 0.1 1.1 1.2 Dimensions without coordinates: y
- property dims: tuple[collections.abc.Hashable, ...]#
Tuple of dimension names associated with this array.
Note that the type of this property is inconsistent with Dataset.dims. See Dataset.sizes and DataArray.sizes for consistently named properties.
See also
DataArray.sizes, Dataset.dims
- does_cover(bounds: Tuple[Tuple[float, float, float], Tuple[float, float, float]]) bool #
Check whether the data fully covers the spatial region specified by bounds. If the data contains only one point along a given direction, the data is assumed constant along that direction and coverage is not checked.
- Parameters
bounds (Tuple[Tuple[float, float, float], Tuple[float, float, float]]) – Min and max bounds packaged as (minx, miny, minz), (maxx, maxy, maxz).
- Returns
Full cover check outcome.
- Return type
bool
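For instance, a minimal coverage check might look like the following sketch (the coordinate values are arbitrary, and numpy is assumed to be imported as np, as in the other examples):
>>> coords = dict(x=[1, 2], y=[2, 3, 4], z=[3, 4, 5, 6])
>>> data = SpatialDataArray(np.random.random((2, 3, 4)), coords=coords)
>>> data.does_cover(((1, 2, 3), (2, 4, 6)))  # region inside the coordinate span
True
>>> data.does_cover(((0, 2, 3), (3, 4, 6)))  # extends beyond the x range
False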
- dot(other: T_Xarray, dim: Dims = None) T_Xarray #
Perform dot product of two DataArrays along their shared dims.
Equivalent to taking tensordot over all shared dims.
- Parameters
other (DataArray) – The other array with which the dot product is performed.
dim (..., str, Iterable of Hashable or None, optional) – Which dimensions to sum over. Ellipsis (…) sums over all dimensions. If not specified, then all the common dimensions are summed over.
- Returns
result – Array resulting from the dot product over all shared dimensions.
- Return type
DataArray
See also
dot, numpy.tensordot
Examples
>>> da_vals = np.arange(6 * 5 * 4).reshape((6, 5, 4)) >>> da = xr.DataArray(da_vals, dims=["x", "y", "z"]) >>> dm_vals = np.arange(4) >>> dm = xr.DataArray(dm_vals, dims=["z"])
>>> dm.dims ('z',)
>>> da.dims ('x', 'y', 'z')
>>> dot_result = da.dot(dm) >>> dot_result.dims ('x', 'y')
- drop(labels: Mapping[Any, Any] | None = None, dim: Hashable | None = None, *, errors: ErrorOptions = 'raise', **labels_kwargs) Self #
Backward compatible method based on drop_vars and drop_sel.
Using either drop_vars or drop_sel is encouraged.
See also
DataArray.drop_vars, DataArray.drop_sel
- drop_duplicates(dim: Hashable | Iterable[Hashable], *, keep: Literal['first', 'last', False] = 'first') Self #
Returns a new DataArray with duplicate dimension values removed.
- Parameters
dim (dimension label or labels) – Pass … to drop duplicates along all dimensions.
keep ({"first", "last", False}, default: "first") –
Determines which duplicates (if any) to keep.
"first"
: Drop duplicates except for the first occurrence."last"
: Drop duplicates except for the last occurrence.False : Drop all duplicates.
- Return type
DataArray
See also
Dataset.drop_duplicates
Examples
>>> da = xr.DataArray( ... np.arange(25).reshape(5, 5), ... dims=("x", "y"), ... coords={"x": np.array([0, 0, 1, 2, 3]), "y": np.array([0, 1, 2, 3, 3])}, ... ) >>> da <xarray.DataArray (x: 5, y: 5)> array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) Coordinates: * x (x) int64 0 0 1 2 3 * y (y) int64 0 1 2 3 3
>>> da.drop_duplicates(dim="x") <xarray.DataArray (x: 4, y: 5)> array([[ 0, 1, 2, 3, 4], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) Coordinates: * x (x) int64 0 1 2 3 * y (y) int64 0 1 2 3 3
>>> da.drop_duplicates(dim="x", keep="last") <xarray.DataArray (x: 4, y: 5)> array([[ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) Coordinates: * x (x) int64 0 1 2 3 * y (y) int64 0 1 2 3 3
Drop all duplicate dimension values:
>>> da.drop_duplicates(dim=...) <xarray.DataArray (x: 4, y: 4)> array([[ 0, 1, 2, 3], [10, 11, 12, 13], [15, 16, 17, 18], [20, 21, 22, 23]]) Coordinates: * x (x) int64 0 1 2 3 * y (y) int64 0 1 2 3
- drop_encoding() Self #
Return a new DataArray without encoding on the array or any attached coords.
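A minimal sketch (the encoding entry below is an arbitrary example of a serialization setting):
>>> da = xr.DataArray([1, 2, 3], dims="x")
>>> da.encoding = {"dtype": "int16"}  # hypothetical serialization setting
>>> da.drop_encoding().encoding
{}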
- drop_indexes(coord_names: Hashable | Iterable[Hashable], *, errors: ErrorOptions = 'raise') Self #
Drop the indexes assigned to the given coordinates.
- Parameters
coord_names (hashable or iterable of hashable) – Name(s) of the coordinate(s) for which to drop the index.
errors ({"raise", "ignore"}, default: "raise") – If ‘raise’, raises a ValueError error if any of the coordinates passed have no index or are not in the dataset. If ‘ignore’, no error is raised.
- Returns
dropped – A new dataarray with dropped indexes.
- Return type
DataArray
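A minimal sketch: the coordinate values stay attached, but their index is removed, so label-based selection along that dimension no longer works:
>>> da = xr.DataArray([1, 2, 3], dims="x", coords={"x": [10, 20, 30]})
>>> dropped = da.drop_indexes("x")
>>> "x" in dropped.indexes
False
>>> "x" in dropped.coords
True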
- drop_isel(indexers: Mapping[Any, Any] | None = None, **indexers_kwargs) Self #
Drop index positions from this DataArray.
- Parameters
indexers (mapping of Hashable to Any or None, default: None) – Index locations to drop
**indexers_kwargs ({dim: position, ...}, optional) – The keyword arguments form of dim and positions.
- Returns
dropped
- Return type
DataArray
- Raises
IndexError –
Examples
>>> da = xr.DataArray(np.arange(25).reshape(5, 5), dims=("X", "Y")) >>> da <xarray.DataArray (X: 5, Y: 5)> array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) Dimensions without coordinates: X, Y
>>> da.drop_isel(X=[0, 4], Y=2) <xarray.DataArray (X: 3, Y: 4)> array([[ 5, 6, 8, 9], [10, 11, 13, 14], [15, 16, 18, 19]]) Dimensions without coordinates: X, Y
>>> da.drop_isel({"X": 3, "Y": 3}) <xarray.DataArray (X: 4, Y: 4)> array([[ 0, 1, 2, 4], [ 5, 6, 7, 9], [10, 11, 12, 14], [20, 21, 22, 24]]) Dimensions without coordinates: X, Y
- drop_sel(labels: Mapping[Any, Any] | None = None, *, errors: ErrorOptions = 'raise', **labels_kwargs) Self #
Drop index labels from this DataArray.
- Parameters
labels (mapping of Hashable to Any) – Index labels to drop
errors ({"raise", "ignore"}, default: "raise") – If ‘raise’, raises a ValueError error if any of the index labels passed are not in the dataset. If ‘ignore’, any given labels that are in the dataset are dropped and no error is raised.
**labels_kwargs ({dim: label, ...}, optional) – The keyword arguments form of dim and labels.
- Returns
dropped
- Return type
DataArray
Examples
>>> da = xr.DataArray( ... np.arange(25).reshape(5, 5), ... coords={"x": np.arange(0, 9, 2), "y": np.arange(0, 13, 3)}, ... dims=("x", "y"), ... ) >>> da <xarray.DataArray (x: 5, y: 5)> array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) Coordinates: * x (x) int64 0 2 4 6 8 * y (y) int64 0 3 6 9 12
>>> da.drop_sel(x=[0, 2], y=9) <xarray.DataArray (x: 3, y: 4)> array([[10, 11, 12, 14], [15, 16, 17, 19], [20, 21, 22, 24]]) Coordinates: * x (x) int64 4 6 8 * y (y) int64 0 3 6 12
>>> da.drop_sel({"x": 6, "y": [0, 3]}) <xarray.DataArray (x: 4, y: 3)> array([[ 2, 3, 4], [ 7, 8, 9], [12, 13, 14], [22, 23, 24]]) Coordinates: * x (x) int64 0 2 4 8 * y (y) int64 6 9 12
- drop_vars(names: str | Iterable[Hashable] | Callable[[Self], str | Iterable[Hashable]], *, errors: ErrorOptions = 'raise') Self #
Returns an array with dropped variables.
- Parameters
names (Hashable or iterable of Hashable or Callable) – Name(s) of variables to drop. If a Callable, this object is passed as its only argument and its result is used.
errors ({"raise", "ignore"}, default: "raise") – If 'raise', raises a ValueError error if any of the variables passed are not in the dataset. If 'ignore', any given names that are in the DataArray are dropped and no error is raised.
- Returns
dropped – New DataArray copied from self with variables removed.
- Return type
DataArray
Examples
>>> data = np.arange(12).reshape(4, 3) >>> da = xr.DataArray( ... data=data, ... dims=["x", "y"], ... coords={"x": [10, 20, 30, 40], "y": [70, 80, 90]}, ... ) >>> da <xarray.DataArray (x: 4, y: 3)> array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]]) Coordinates: * x (x) int64 10 20 30 40 * y (y) int64 70 80 90
Removing a single variable:
>>> da.drop_vars("x") <xarray.DataArray (x: 4, y: 3)> array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]]) Coordinates: * y (y) int64 70 80 90 Dimensions without coordinates: x
Removing a list of variables:
>>> da.drop_vars(["x", "y"]) <xarray.DataArray (x: 4, y: 3)> array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]]) Dimensions without coordinates: x, y
>>> da.drop_vars(lambda x: x.coords) <xarray.DataArray (x: 4, y: 3)> array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]]) Dimensions without coordinates: x, y
- dropna(dim: Hashable, *, how: Literal['any', 'all'] = 'any', thresh: int | None = None) Self #
Returns a new array with dropped labels for missing values along the provided dimension.
- Parameters
dim (Hashable) – Dimension along which to drop missing values. Dropping along multiple dimensions simultaneously is not yet supported.
how ({"any", "all"}, default: "any") –
any : if any NA values are present, drop that label
all : if all values are NA, drop that label
thresh (int or None, default: None) – If supplied, require this many non-NA values.
- Returns
dropped
- Return type
DataArray
Examples
>>> temperature = [ ... [0, 4, 2, 9], ... [np.nan, np.nan, np.nan, np.nan], ... [np.nan, 4, 2, 0], ... [3, 1, 0, 0], ... ] >>> da = xr.DataArray( ... data=temperature, ... dims=["Y", "X"], ... coords=dict( ... lat=("Y", np.array([-20.0, -20.25, -20.50, -20.75])), ... lon=("X", np.array([10.0, 10.25, 10.5, 10.75])), ... ), ... ) >>> da <xarray.DataArray (Y: 4, X: 4)> array([[ 0., 4., 2., 9.], [nan, nan, nan, nan], [nan, 4., 2., 0.], [ 3., 1., 0., 0.]]) Coordinates: lat (Y) float64 -20.0 -20.25 -20.5 -20.75 lon (X) float64 10.0 10.25 10.5 10.75 Dimensions without coordinates: Y, X
>>> da.dropna(dim="Y", how="any") <xarray.DataArray (Y: 2, X: 4)> array([[0., 4., 2., 9.], [3., 1., 0., 0.]]) Coordinates: lat (Y) float64 -20.0 -20.75 lon (X) float64 10.0 10.25 10.5 10.75 Dimensions without coordinates: Y, X
Drop values only if all values along the dimension are NaN:
>>> da.dropna(dim="Y", how="all") <xarray.DataArray (Y: 3, X: 4)> array([[ 0., 4., 2., 9.], [nan, 4., 2., 0.], [ 3., 1., 0., 0.]]) Coordinates: lat (Y) float64 -20.0 -20.5 -20.75 lon (X) float64 10.0 10.25 10.5 10.75 Dimensions without coordinates: Y, X
- property dtype: numpy.dtype#
Data-type of the array’s elements.
See also
ndarray.dtype, numpy.dtype
- property encoding: dict[Any, Any]#
Dictionary of format-specific settings for how this array should be serialized.
- equals(other: Self) bool #
True if two DataArrays have the same dimensions, coordinates and values; otherwise False.
DataArrays can still be equal (like pandas objects) if they have NaN values in the same locations.
This method is necessary because v1 == v2 for DataArray does element-wise comparisons (like numpy.ndarrays).
- Parameters
other (DataArray) – DataArray to compare to.
- Returns
equal – True if the two DataArrays are equal.
- Return type
bool
See also
DataArray.broadcast_equals, DataArray.identical
Examples
>>> a = xr.DataArray([1, 2, 3], dims="X") >>> b = xr.DataArray([1, 2, 3], dims="X", attrs=dict(units="m")) >>> c = xr.DataArray([1, 2, 3], dims="Y") >>> d = xr.DataArray([3, 2, 1], dims="X") >>> a <xarray.DataArray (X: 3)> array([1, 2, 3]) Dimensions without coordinates: X >>> b <xarray.DataArray (X: 3)> array([1, 2, 3]) Dimensions without coordinates: X Attributes: units: m >>> c <xarray.DataArray (Y: 3)> array([1, 2, 3]) Dimensions without coordinates: Y >>> d <xarray.DataArray (X: 3)> array([3, 2, 1]) Dimensions without coordinates: X
>>> a.equals(b) True >>> a.equals(c) False >>> a.equals(d) False
- expand_dims(dim: None | Hashable | Sequence[Hashable] | Mapping[Any, Any] = None, axis: None | int | Sequence[int] = None, **dim_kwargs: Any) Self #
Return a new object with an additional axis (or axes) inserted at the corresponding position in the array shape. The new object is a view into the underlying array, not a copy.
If dim is already a scalar coordinate, it will be promoted to a 1D coordinate consisting of a single value.
- Parameters
dim (Hashable, sequence of Hashable, dict, or None, optional) – Dimensions to include on the new variable. If provided as str or sequence of str, then dimensions are inserted with length 1. If provided as a dict, then the keys are the new dimensions and the values are either integers (giving the length of the new dimensions) or sequence/ndarray (giving the coordinates of the new dimensions).
axis (int, sequence of int, or None, default: None) – Axis position(s) where new axis is to be inserted (position(s) on the result array). If a sequence of integers is passed, multiple axes are inserted. In this case, dim arguments should be same length list. If axis=None is passed, all the axes will be inserted to the start of the result array.
**dim_kwargs (int or sequence or ndarray) – The keywords are arbitrary dimensions being inserted and the values are either the lengths of the new dims (if int is given), or their coordinates. Note, this is an alternative to passing a dict to the dim kwarg and will only be used if dim is None.
- Returns
expanded – This object, but with additional dimension(s).
- Return type
DataArray
See also
Dataset.expand_dims
Examples
>>> da = xr.DataArray(np.arange(5), dims=("x")) >>> da <xarray.DataArray (x: 5)> array([0, 1, 2, 3, 4]) Dimensions without coordinates: x
Add new dimension of length 2:
>>> da.expand_dims(dim={"y": 2}) <xarray.DataArray (y: 2, x: 5)> array([[0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]) Dimensions without coordinates: y, x
>>> da.expand_dims(dim={"y": 2}, axis=1) <xarray.DataArray (x: 5, y: 2)> array([[0, 0], [1, 1], [2, 2], [3, 3], [4, 4]]) Dimensions without coordinates: x, y
Add a new dimension with coordinates from array:
>>> da.expand_dims(dim={"y": np.arange(5)}, axis=0) <xarray.DataArray (y: 5, x: 5)> array([[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]) Coordinates: * y (y) int64 0 1 2 3 4 Dimensions without coordinates: x
- ffill(dim: Hashable, limit: int | None = None) Self #
Fill NaN values by propagating values forward
Requires bottleneck.
- Parameters
dim (Hashable) – Specifies the dimension along which to propagate values when filling.
limit (int or None, default: None) – The maximum number of consecutive NaN values to forward fill. In other words, if there is a gap with more than this number of consecutive NaNs, it will only be partially filled. Must be greater than 0 or None for no limit. Must be None or greater than or equal to axis length if filling along chunked axes (dimensions).
- Returns
filled
- Return type
DataArray
Examples
>>> temperature = np.array( ... [ ... [np.nan, 1, 3], ... [0, np.nan, 5], ... [5, np.nan, np.nan], ... [3, np.nan, np.nan], ... [0, 2, 0], ... ] ... ) >>> da = xr.DataArray( ... data=temperature, ... dims=["Y", "X"], ... coords=dict( ... lat=("Y", np.array([-20.0, -20.25, -20.50, -20.75, -21.0])), ... lon=("X", np.array([10.0, 10.25, 10.5])), ... ), ... ) >>> da <xarray.DataArray (Y: 5, X: 3)> array([[nan, 1., 3.], [ 0., nan, 5.], [ 5., nan, nan], [ 3., nan, nan], [ 0., 2., 0.]]) Coordinates: lat (Y) float64 -20.0 -20.25 -20.5 -20.75 -21.0 lon (X) float64 10.0 10.25 10.5 Dimensions without coordinates: Y, X
Fill all NaN values:
>>> da.ffill(dim="Y", limit=None) <xarray.DataArray (Y: 5, X: 3)> array([[nan, 1., 3.], [ 0., 1., 5.], [ 5., 1., 5.], [ 3., 1., 5.], [ 0., 2., 0.]]) Coordinates: lat (Y) float64 -20.0 -20.25 -20.5 -20.75 -21.0 lon (X) float64 10.0 10.25 10.5 Dimensions without coordinates: Y, X
Fill only the first of consecutive NaN values:
>>> da.ffill(dim="Y", limit=1) <xarray.DataArray (Y: 5, X: 3)> array([[nan, 1., 3.], [ 0., 1., 5.], [ 5., nan, 5.], [ 3., nan, nan], [ 0., 2., 0.]]) Coordinates: lat (Y) float64 -20.0 -20.25 -20.5 -20.75 -21.0 lon (X) float64 10.0 10.25 10.5 Dimensions without coordinates: Y, X
- fillna(value: Any) Self #
Fill missing values in this object.
This operation follows the normal broadcasting and alignment rules that xarray uses for binary arithmetic, except the result is aligned to this object (join='left') instead of aligned to the intersection of index coordinates (join='inner').
- Parameters
value (scalar, ndarray or DataArray) – Used to fill all matching missing values in this array. If the argument is a DataArray, it is first aligned with (reindexed to) this array.
- Returns
filled
- Return type
DataArray
Examples
>>> da = xr.DataArray( ... np.array([1, 4, np.nan, 0, 3, np.nan]), ... dims="Z", ... coords=dict( ... Z=("Z", np.arange(6)), ... height=("Z", np.array([0, 10, 20, 30, 40, 50])), ... ), ... ) >>> da <xarray.DataArray (Z: 6)> array([ 1., 4., nan, 0., 3., nan]) Coordinates: * Z (Z) int64 0 1 2 3 4 5 height (Z) int64 0 10 20 30 40 50
Fill all NaN values with 0:
>>> da.fillna(0) <xarray.DataArray (Z: 6)> array([1., 4., 0., 0., 3., 0.]) Coordinates: * Z (Z) int64 0 1 2 3 4 5 height (Z) int64 0 10 20 30 40 50
Fill NaN values with corresponding values in array:
>>> da.fillna(np.array([2, 9, 4, 2, 8, 9])) <xarray.DataArray (Z: 6)> array([1., 4., 4., 0., 3., 9.]) Coordinates: * Z (Z) int64 0 1 2 3 4 5 height (Z) int64 0 10 20 30 40 50
- classmethod from_dict(d: Mapping[str, Any]) Self #
Convert a dictionary into an xarray.DataArray
- Parameters
d (dict) – Mapping with a minimum structure of {“dims”: […], “data”: […]}
- Returns
obj
- Return type
xarray.DataArray
See also
DataArray.to_dict, Dataset.from_dict
Examples
>>> d = {"dims": "t", "data": [1, 2, 3]} >>> da = xr.DataArray.from_dict(d) >>> da <xarray.DataArray (t: 3)> array([1, 2, 3]) Dimensions without coordinates: t
>>> d = { ... "coords": { ... "t": {"dims": "t", "data": [0, 1, 2], "attrs": {"units": "s"}} ... }, ... "attrs": {"title": "air temperature"}, ... "dims": "t", ... "data": [10, 20, 30], ... "name": "a", ... } >>> da = xr.DataArray.from_dict(d) >>> da <xarray.DataArray 'a' (t: 3)> array([10, 20, 30]) Coordinates: * t (t) int64 0 1 2 Attributes: title: air temperature
- classmethod from_file(fname: str, group_path: str) tidy3d.components.data.data_array.DataArray #
Load a DataArray from an HDF5 file, given the path to the group within the file.
- classmethod from_hdf5(fname: str, group_path: str) tidy3d.components.data.data_array.DataArray #
Load a DataArray from an HDF5 file, given the path to the group within the file.
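For example, loading a stored array might look like this sketch (the file name and group path below are hypothetical):
>>> loaded = SpatialDataArray.from_hdf5(
...     fname="simulation_data.hdf5",  # hypothetical file
...     group_path="data/permittivity",  # hypothetical group within the file
... )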
- classmethod from_iris(cube: iris_Cube) Self #
Convert an iris.cube.Cube into an xarray.DataArray.
- classmethod from_series(series: pandas.core.series.Series, sparse: bool = False) xarray.core.dataarray.DataArray #
Convert a pandas.Series into an xarray.DataArray.
If the series’s index is a MultiIndex, it will be expanded into a tensor product of one-dimensional coordinates (filling in missing values with NaN). Thus this operation should be the inverse of the to_series method.
- Parameters
series (Series) – Pandas Series object to convert.
sparse (bool, default: False) – If sparse=True, creates a sparse array instead of a dense NumPy array. Requires the pydata/sparse package.
See also
DataArray.to_series, Dataset.from_dataframe
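A minimal sketch (pandas assumed imported as pd, as in the other examples):
>>> series = pd.Series([1, 2, 3], index=pd.Index([10, 20, 30], name="x"))
>>> xr.DataArray.from_series(series) <xarray.DataArray (x: 3)> array([1, 2, 3]) Coordinates: * x (x) int64 10 20 30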
- get_axis_num(dim: collections.abc.Hashable | collections.abc.Iterable[collections.abc.Hashable]) int | tuple[int, ...] #
Return axis number(s) corresponding to dimension(s) in this array.
- Parameters
dim (str or iterable of str) – Dimension name(s) for which to look up axes.
- Returns
Axis number or numbers corresponding to the given dimensions.
- Return type
int or tuple of int
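A minimal example (numpy assumed imported as np):
>>> da = xr.DataArray(np.zeros((2, 3)), dims=("x", "y"))
>>> da.get_axis_num("y")
1
>>> da.get_axis_num(("y", "x"))
(1, 0)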
- get_index(key: collections.abc.Hashable) pandas.core.indexes.base.Index #
Get an index for a dimension, with fall-back to a default RangeIndex.
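For example, when a dimension has no coordinate, a default RangeIndex is returned:
>>> da = xr.DataArray([10, 20], dims="x")  # no coordinate on "x"
>>> da.get_index("x")
RangeIndex(start=0, stop=2, step=1)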
- groupby(group: Hashable | DataArray | IndexVariable, squeeze: bool = True, restore_coord_dims: bool = False) DataArrayGroupBy #
Returns a DataArrayGroupBy object for performing grouped operations.
- Parameters
group (Hashable, DataArray or IndexVariable) – Array whose unique values should be used to group this array. If a Hashable, must be the name of a coordinate contained in this dataarray.
squeeze (bool, default: True) – If “group” is a dimension of any arrays in this dataset, squeeze controls whether the subarrays have a dimension of length 1 along that dimension or if the dimension is squeezed out.
restore_coord_dims (bool, default: False) – If True, also restore the dimension order of multi-dimensional coordinates.
- Returns
grouped – A DataArrayGroupBy object patterned after pandas.GroupBy that can be iterated over in the form of (unique_value, grouped_array) pairs.
- Return type
DataArrayGroupBy
Examples
Calculate daily anomalies for daily data:
>>> da = xr.DataArray( ... np.linspace(0, 1826, num=1827), ... coords=[pd.date_range("2000-01-01", "2004-12-31", freq="D")], ... dims="time", ... ) >>> da <xarray.DataArray (time: 1827)> array([0.000e+00, 1.000e+00, 2.000e+00, ..., 1.824e+03, 1.825e+03, 1.826e+03]) Coordinates: * time (time) datetime64[ns] 2000-01-01 2000-01-02 ... 2004-12-31 >>> da.groupby("time.dayofyear") - da.groupby("time.dayofyear").mean("time") <xarray.DataArray (time: 1827)> array([-730.8, -730.8, -730.8, ..., 730.2, 730.2, 730.5]) Coordinates: * time (time) datetime64[ns] 2000-01-01 2000-01-02 ... 2004-12-31 dayofyear (time) int64 1 2 3 4 5 6 7 8 ... 359 360 361 362 363 364 365 366
See also
- groupby
User guide explanation of how to group and bin data.
- xarray-tutorial:intermediate/01-high-level-computation-patterns
Tutorial on Groupby() for windowed computation
- xarray-tutorial:fundamentals/03.2_groupby_with_xarray
Tutorial on Groupby() demonstrating reductions, transformation and comparison with resample()
DataArray.groupby_bins, Dataset.groupby, core.groupby.DataArrayGroupBy, DataArray.coarsen, pandas.DataFrame.groupby, Dataset.resample, DataArray.resample
- groupby_bins(group: Hashable | DataArray | IndexVariable, bins: ArrayLike, right: bool = True, labels: ArrayLike | Literal[False] | None = None, precision: int = 3, include_lowest: bool = False, squeeze: bool = True, restore_coord_dims: bool = False) DataArrayGroupBy #
Returns a DataArrayGroupBy object for performing grouped operations.
Rather than using all unique values of group, the values are discretized first by applying pandas.cut [1] to group.
- Parameters
group (Hashable, DataArray or IndexVariable) – Array whose binned values should be used to group this array. If a Hashable, must be the name of a coordinate contained in this dataarray.
bins (int or array-like) – If bins is an int, it defines the number of equal-width bins in the range of x. However, in this case, the range of x is extended by .1% on each side to include the min or max values of x. If bins is a sequence it defines the bin edges allowing for non-uniform bin width. No extension of the range of x is done in this case.
right (bool, default: True) – Indicates whether the bins include the rightmost edge or not. If right == True (the default), then the bins [1,2,3,4] indicate (1,2], (2,3], (3,4].
labels (array-like, False or None, default: None) – Used as labels for the resulting bins. Must be of the same length as the resulting bins. If False, string bin labels are assigned by pandas.cut.
precision (int, default: 3) – The precision at which to store and display the bins labels.
include_lowest (bool, default: False) – Whether the first interval should be left-inclusive or not.
squeeze (bool, default: True) – If “group” is a dimension of any arrays in this dataset, squeeze controls whether the subarrays have a dimension of length 1 along that dimension or if the dimension is squeezed out.
restore_coord_dims (bool, default: False) – If True, also restore the dimension order of multi-dimensional coordinates.
- Returns
grouped – A DataArrayGroupBy object patterned after pandas.GroupBy that can be iterated over in the form of (unique_value, grouped_array) pairs. The name of the group has the added suffix _bins in order to distinguish it from the original variable.
- Return type
DataArrayGroupBy
See also
- groupby
User guide explanation of how to group and bin data.
DataArray.groupby, Dataset.groupby_bins, core.groupby.DataArrayGroupBy, pandas.DataFrame.groupby
References
[1] pandas.cut : https://pandas.pydata.org/docs/reference/api/pandas.cut.html
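A minimal sketch of binned grouping (bin edges chosen arbitrarily):
>>> da = xr.DataArray(np.arange(4), dims="x", coords={"x": [0.1, 0.4, 0.6, 0.9]})
>>> grouped = da.groupby_bins("x", bins=[0, 0.5, 1])
>>> grouped.sum()  # sums 0 + 1 within (0, 0.5] and 2 + 3 within (0.5, 1]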
- head(indexers: Mapping[Any, int] | int | None = None, **indexers_kwargs: Any) Self #
Return a new DataArray whose data is given by the first n values along the specified dimension(s). Default n = 5
See also
Dataset.head, DataArray.tail, DataArray.thin
Examples
>>> da = xr.DataArray( ... np.arange(25).reshape(5, 5), ... dims=("x", "y"), ... ) >>> da <xarray.DataArray (x: 5, y: 5)> array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) Dimensions without coordinates: x, y
>>> da.head(x=1) <xarray.DataArray (x: 1, y: 5)> array([[0, 1, 2, 3, 4]]) Dimensions without coordinates: x, y
>>> da.head({"x": 2, "y": 2}) <xarray.DataArray (x: 2, y: 2)> array([[0, 1], [5, 6]]) Dimensions without coordinates: x, y
- identical(other: Self) bool #
Like equals, but also checks the array name and attributes, and attributes on all coordinates.
- Parameters
other (DataArray) – DataArray to compare to.
- Returns
equal – True if the two DataArrays are identical.
- Return type
bool
See also
DataArray.broadcast_equals, DataArray.equals
Examples
>>> a = xr.DataArray([1, 2, 3], dims="X", attrs=dict(units="m"), name="Width") >>> b = xr.DataArray([1, 2, 3], dims="X", attrs=dict(units="m"), name="Width") >>> c = xr.DataArray([1, 2, 3], dims="X", attrs=dict(units="ft"), name="Width") >>> a <xarray.DataArray 'Width' (X: 3)> array([1, 2, 3]) Dimensions without coordinates: X Attributes: units: m >>> b <xarray.DataArray 'Width' (X: 3)> array([1, 2, 3]) Dimensions without coordinates: X Attributes: units: m >>> c <xarray.DataArray 'Width' (X: 3)> array([1, 2, 3]) Dimensions without coordinates: X Attributes: units: ft
>>> a.equals(b) True >>> a.identical(b) True
>>> a.equals(c) True >>> a.identical(c) False
- idxmax(dim: Hashable = None, *, skipna: bool | None = None, fill_value: Any = <NA>, keep_attrs: bool | None = None) Self #
Return the coordinate label of the maximum value along a dimension.
Returns a new DataArray named after the dimension with the values of the coordinate labels along that dimension corresponding to maximum values along that dimension.
In comparison to argmax(), this returns the coordinate label while argmax() returns the index.
- Parameters
dim (Hashable, optional) – Dimension over which to apply idxmax. This is optional for 1D arrays, but required for arrays with 2 or more dimensions.
skipna (bool or None, default: None) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float, complex, and object dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (datetime64 or timedelta64).
fill_value (Any, default: NaN) – Value to be filled in case all of the values along a dimension are null. By default this is NaN. The fill value and result are automatically converted to a compatible dtype if possible. Ignored if skipna is False.
keep_attrs (bool or None, optional) – If True, the attributes (attrs) will be copied from the original object to the new one. If False, the new object will be returned without attributes.
- Returns
reduced – New DataArray object with idxmax applied to its data and the indicated dimension removed.
- Return type
DataArray
See also
Dataset.idxmax, DataArray.idxmin, DataArray.max, DataArray.argmax
Examples
>>> array = xr.DataArray( ... [0, 2, 1, 0, -2], dims="x", coords={"x": ["a", "b", "c", "d", "e"]} ... ) >>> array.max() <xarray.DataArray ()> array(2) >>> array.argmax(...) {'x': <xarray.DataArray ()> array(1)} >>> array.idxmax() <xarray.DataArray 'x' ()> array('b', dtype='<U1')
>>> array = xr.DataArray( ... [ ... [2.0, 1.0, 2.0, 0.0, -2.0], ... [-4.0, np.nan, 2.0, np.nan, -2.0], ... [np.nan, np.nan, 1.0, np.nan, np.nan], ... ], ... dims=["y", "x"], ... coords={"y": [-1, 0, 1], "x": np.arange(5.0) ** 2}, ... ) >>> array.max(dim="x") <xarray.DataArray (y: 3)> array([2., 2., 1.]) Coordinates: * y (y) int64 -1 0 1 >>> array.argmax(dim="x") <xarray.DataArray (y: 3)> array([0, 2, 2]) Coordinates: * y (y) int64 -1 0 1 >>> array.idxmax(dim="x") <xarray.DataArray 'x' (y: 3)> array([0., 4., 4.]) Coordinates: * y (y) int64 -1 0 1
- idxmin(dim: Hashable | None = None, *, skipna: bool | None = None, fill_value: Any = <NA>, keep_attrs: bool | None = None) Self #
Return the coordinate label of the minimum value along a dimension.
Returns a new DataArray named after the dimension with the values of the coordinate labels along that dimension corresponding to minimum values along that dimension.
In comparison to argmin(), this returns the coordinate label while argmin() returns the index.
- Parameters
dim (str, optional) – Dimension over which to apply idxmin. This is optional for 1D arrays, but required for arrays with 2 or more dimensions.
skipna (bool or None, default: None) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float, complex, and object dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (datetime64 or timedelta64).
fill_value (Any, default: NaN) – Value to be filled in case all of the values along a dimension are null. By default this is NaN. The fill value and result are automatically converted to a compatible dtype if possible. Ignored if skipna is False.
keep_attrs (bool or None, optional) – If True, the attributes (attrs) will be copied from the original object to the new one. If False, the new object will be returned without attributes.
- Returns
reduced – New DataArray object with idxmin applied to its data and the indicated dimension removed.
- Return type
DataArray
See also
Dataset.idxmin, DataArray.idxmax, DataArray.min, DataArray.argmin
Examples
>>> array = xr.DataArray( ... [0, 2, 1, 0, -2], dims="x", coords={"x": ["a", "b", "c", "d", "e"]} ... ) >>> array.min() <xarray.DataArray ()> array(-2) >>> array.argmin(...) {'x': <xarray.DataArray ()> array(4)} >>> array.idxmin() <xarray.DataArray 'x' ()> array('e', dtype='<U1')
>>> array = xr.DataArray( ... [ ... [2.0, 1.0, 2.0, 0.0, -2.0], ... [-4.0, np.nan, 2.0, np.nan, -2.0], ... [np.nan, np.nan, 1.0, np.nan, np.nan], ... ], ... dims=["y", "x"], ... coords={"y": [-1, 0, 1], "x": np.arange(5.0) ** 2}, ... ) >>> array.min(dim="x") <xarray.DataArray (y: 3)> array([-2., -4., 1.]) Coordinates: * y (y) int64 -1 0 1 >>> array.argmin(dim="x") <xarray.DataArray (y: 3)> array([4, 0, 2]) Coordinates: * y (y) int64 -1 0 1 >>> array.idxmin(dim="x") <xarray.DataArray 'x' (y: 3)> array([16., 0., 4.]) Coordinates: * y (y) int64 -1 0 1
- property imag: Self#
The imaginary part of the array.
See also
numpy.ndarray.imag
- property indexes: xarray.core.indexes.Indexes#
Mapping of pandas.Index objects used for label based indexing.
Raises an error if this Dataset has indexes that cannot be coerced to pandas.Index objects.
See also
DataArray.xindexes
- integrate(coord: Hashable | Sequence[Hashable] = None, datetime_unit: DatetimeUnitOptions = None) Self #
Integrate along the given coordinate using the trapezoidal rule.
Note
This feature is limited to simple cartesian geometry, i.e. coord must be one dimensional.
- Parameters
coord (Hashable, or sequence of Hashable) – Coordinate(s) used for the integration.
datetime_unit ({'Y', 'M', 'W', 'D', 'h', 'm', 's', 'ms', 'us', 'ns', 'ps', 'fs', 'as', None}, optional) – Specify the unit if a datetime coordinate is used.
- Returns
integrated
- Return type
DataArray
See also
Dataset.integrate
numpy.trapz
corresponding numpy function
Examples
>>> da = xr.DataArray( ... np.arange(12).reshape(4, 3), ... dims=["x", "y"], ... coords={"x": [0, 0.1, 1.1, 1.2]}, ... ) >>> da <xarray.DataArray (x: 4, y: 3)> array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]]) Coordinates: * x (x) float64 0.0 0.1 1.1 1.2 Dimensions without coordinates: y >>> >>> da.integrate("x") <xarray.DataArray (y: 3)> array([5.4, 6.6, 7.8]) Dimensions without coordinates: y
- interp(coords: Mapping[Any, Any] | None = None, method: InterpOptions = 'linear', assume_sorted: bool = False, kwargs: Mapping[str, Any] | None = None, **coords_kwargs: Any) Self #
Interpolate a DataArray onto new coordinates
Performs univariate or multivariate interpolation of a DataArray onto new coordinates using scipy's interpolation routines. If interpolating along an existing dimension, scipy.interpolate.interp1d is called. When interpolating along multiple existing dimensions, an attempt is made to decompose the interpolation into multiple 1-dimensional interpolations. If this is possible, scipy.interpolate.interp1d is called. Otherwise, scipy.interpolate.interpn() is called.
- Parameters
coords (dict, optional) – Mapping from dimension names to the new coordinates. New coordinate can be a scalar, array-like or DataArray. If DataArrays are passed as new coordinates, their dimensions are used for the broadcasting. Missing values are skipped.
method ({"linear", "nearest", "zero", "slinear", "quadratic", "cubic", "polynomial"}, default: "linear") –
The method used to interpolate. The method should be supported by the scipy interpolator:
interp1d: {"linear", "nearest", "zero", "slinear", "quadratic", "cubic", "polynomial"}
interpn: {"linear", "nearest"}
If "polynomial" is passed, the order keyword argument must also be provided.
assume_sorted (bool, default: False) – If False, values of x can be in any order and they are sorted first. If True, x has to be an array of monotonically increasing values.
kwargs (dict-like or None, default: None) – Additional keyword arguments passed to scipy's interpolator. Valid options and their behavior depend on whether interp1d or interpn is used.
**coords_kwargs ({dim: coordinate, ...}, optional) – The keyword arguments form of coords. One of coords or coords_kwargs must be provided.
- Returns
interpolated – New dataarray on the new coordinates.
- Return type
DataArray
Notes
scipy is required.
See also
scipy.interpolate.interp1d scipy.interpolate.interpn
- xarray-tutorial:fundamentals/02.2_manipulating_dimensions
Tutorial material on manipulating data resolution using interp()
Examples
>>> da = xr.DataArray( ... data=[[1, 4, 2, 9], [2, 7, 6, np.nan], [6, np.nan, 5, 8]], ... dims=("x", "y"), ... coords={"x": [0, 1, 2], "y": [10, 12, 14, 16]}, ... ) >>> da <xarray.DataArray (x: 3, y: 4)> array([[ 1., 4., 2., 9.], [ 2., 7., 6., nan], [ 6., nan, 5., 8.]]) Coordinates: * x (x) int64 0 1 2 * y (y) int64 10 12 14 16
1D linear interpolation (the default):
>>> da.interp(x=[0, 0.75, 1.25, 1.75]) <xarray.DataArray (x: 4, y: 4)> array([[1. , 4. , 2. , nan], [1.75, 6.25, 5. , nan], [3. , nan, 5.75, nan], [5. , nan, 5.25, nan]]) Coordinates: * y (y) int64 10 12 14 16 * x (x) float64 0.0 0.75 1.25 1.75
1D nearest interpolation:
>>> da.interp(x=[0, 0.75, 1.25, 1.75], method="nearest") <xarray.DataArray (x: 4, y: 4)> array([[ 1., 4., 2., 9.], [ 2., 7., 6., nan], [ 2., 7., 6., nan], [ 6., nan, 5., 8.]]) Coordinates: * y (y) int64 10 12 14 16 * x (x) float64 0.0 0.75 1.25 1.75
1D linear extrapolation:
>>> da.interp( ... x=[1, 1.5, 2.5, 3.5], ... method="linear", ... kwargs={"fill_value": "extrapolate"}, ... ) <xarray.DataArray (x: 4, y: 4)> array([[ 2. , 7. , 6. , nan], [ 4. , nan, 5.5, nan], [ 8. , nan, 4.5, nan], [12. , nan, 3.5, nan]]) Coordinates: * y (y) int64 10 12 14 16 * x (x) float64 1.0 1.5 2.5 3.5
2D linear interpolation:
>>> da.interp(x=[0, 0.75, 1.25, 1.75], y=[11, 13, 15], method="linear") <xarray.DataArray (x: 4, y: 3)> array([[2.5 , 3. , nan], [4. , 5.625, nan], [ nan, nan, nan], [ nan, nan, nan]]) Coordinates: * x (x) float64 0.0 0.75 1.25 1.75 * y (y) int64 11 13 15
- interp_calendar(target: pd.DatetimeIndex | CFTimeIndex | DataArray, dim: str = 'time') Self #
Interpolates the DataArray to another calendar based on decimal year measure.
Each timestamp in source and target is first converted to its decimal year equivalent, then source is interpolated on the target coordinate. The decimal year of a timestamp is its year plus its sub-year component converted to the fraction of its year. For example, "2000-03-01 12:00" is 2000.1653 in a standard calendar or 2000.16301 in a "noleap" calendar.
This method should only be used when the time (HH:MM:SS) information of time coordinate is not important.
- Parameters
target (DataArray or DatetimeIndex or CFTimeIndex) – The target time coordinate of a valid dtype (np.datetime64 or cftime objects)
dim (str) – The time coordinate name.
- Returns
The source interpolated on the decimal years of target.
- Return type
DataArray
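A hedged sketch of converting a standard-calendar array onto a "noleap" calendar (requires the optional cftime package; pandas assumed imported as pd):
>>> times = pd.date_range("2000-01-01", periods=4, freq="D")
>>> da = xr.DataArray(np.arange(4), dims="time", coords={"time": times})
>>> target = xr.cftime_range("2000-01-01", periods=4, freq="D", calendar="noleap")
>>> da.interp_calendar(target)  # values interpolated onto the noleap time coordinate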
- interp_like(other: T_Xarray, method: InterpOptions = 'linear', assume_sorted: bool = False, kwargs: Mapping[str, Any] | None = None) Self #
Interpolate this object onto the coordinates of another object, filling out of range values with NaN.
If interpolating along a single existing dimension, scipy.interpolate.interp1d is called. When interpolating along multiple existing dimensions, an attempt is made to decompose the interpolation into multiple 1-dimensional interpolations. If this is possible, scipy.interpolate.interp1d is called. Otherwise, scipy.interpolate.interpn() is called.
- Parameters
other (Dataset or DataArray) – Object with an 'indexes' attribute giving a mapping from dimension names to a 1d array-like, which provides coordinates upon which to index the variables in this dataset. Missing values are skipped.
method ({"linear", "nearest", "zero", "slinear", "quadratic", "cubic", "polynomial"}, default: "linear") –
The method used to interpolate. The method should be supported by the scipy interpolator:
{"linear", "nearest", "zero", "slinear", "quadratic", "cubic", "polynomial"} when interp1d is called.
{"linear", "nearest"} when interpn is called.
If "polynomial" is passed, the order keyword argument must also be provided.
assume_sorted (bool, default: False) – If False, values of coordinates that are interpolated over can be in any order and they are sorted first. If True, interpolated coordinates are assumed to be an array of monotonically increasing values.
kwargs (dict, optional) – Additional keyword passed to scipy’s interpolator.
- Returns
interpolated – Another dataarray by interpolating this dataarray’s data along the coordinates of the other object.
- Return type
DataArray
Examples
>>> data = np.arange(12).reshape(4, 3) >>> da1 = xr.DataArray( ... data=data, ... dims=["x", "y"], ... coords={"x": [10, 20, 30, 40], "y": [70, 80, 90]}, ... ) >>> da1 <xarray.DataArray (x: 4, y: 3)> array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]]) Coordinates: * x (x) int64 10 20 30 40 * y (y) int64 70 80 90 >>> da2 = xr.DataArray( ... data=data, ... dims=["x", "y"], ... coords={"x": [10, 20, 29, 39], "y": [70, 80, 90]}, ... ) >>> da2 <xarray.DataArray (x: 4, y: 3)> array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]]) Coordinates: * x (x) int64 10 20 29 39 * y (y) int64 70 80 90
Interpolate the values in the coordinates of the other DataArray with respect to the source’s values:
>>> da2.interp_like(da1) <xarray.DataArray (x: 4, y: 3)> array([[0. , 1. , 2. ], [3. , 4. , 5. ], [6.3, 7.3, 8.3], [nan, nan, nan]]) Coordinates: * x (x) int64 10 20 30 40 * y (y) int64 70 80 90
Could also extrapolate missing values:
>>> da2.interp_like(da1, kwargs={"fill_value": "extrapolate"}) <xarray.DataArray (x: 4, y: 3)> array([[ 0. , 1. , 2. ], [ 3. , 4. , 5. ], [ 6.3, 7.3, 8.3], [ 9.3, 10.3, 11.3]]) Coordinates: * x (x) int64 10 20 30 40 * y (y) int64 70 80 90
Notes
scipy is required. If the dataarray has object-type coordinates, reindex is used for these coordinates instead of the interpolation.
See also
DataArray.interp, DataArray.reindex_like
- interpolate_na(dim: Hashable | None = None, method: InterpOptions = 'linear', limit: int | None = None, use_coordinate: bool | str = True, max_gap: None | int | float | str | pd.Timedelta | np.timedelta64 | datetime.timedelta = None, keep_attrs: bool | None = None, **kwargs: Any) Self #
Fill in NaNs by interpolating according to different methods.
- Parameters
dim (Hashable or None, optional) – Specifies the dimension along which to interpolate.
method ({"linear", "nearest", "zero", "slinear", "quadratic", "cubic", "polynomial", "barycentric", "krogh", "pchip", "spline", "akima"}, default: "linear") –
String indicating which method to use for interpolation:
'linear': linear interpolation. Additional keyword arguments are passed to numpy.interp()
'nearest', 'zero', 'slinear', 'quadratic', 'cubic', 'polynomial': are passed to scipy.interpolate.interp1d(). If method='polynomial', the order keyword argument must also be provided.
'barycentric', 'krogh', 'pchip', 'spline', 'akima': use their respective scipy.interpolate classes.
use_coordinate (bool or str, default: True) – Specifies which index to use as the x values in the interpolation formulated as y = f(x). If False, values are treated as if equally-spaced along dim. If True, the IndexVariable dim is used. If use_coordinate is a string, it specifies the name of a coordinate variable to use as the index.
limit (int or None, default: None) – Maximum number of consecutive NaNs to fill. Must be greater than 0 or None for no limit. This filling is done regardless of the size of the gap in the data. To only interpolate over gaps less than a given length, see max_gap.
max_gap (int, float, str, pandas.Timedelta, numpy.timedelta64, datetime.timedelta, default: None) –
Maximum size of gap, a continuous sequence of NaNs, that will be filled. Use None for no limit. When interpolating along a datetime64 dimension and use_coordinate=True, max_gap can be one of the following:
a string that is valid input for pandas.to_timedelta
a numpy.timedelta64 object
a pandas.Timedelta object
a datetime.timedelta object
Otherwise, max_gap must be an int or a float. Use of max_gap with unlabeled dimensions has not been implemented yet. Gap length is defined as the difference between coordinate values at the first data point after a gap and the last value before a gap. For gaps at the beginning (end), gap length is defined as the difference between coordinate values at the first (last) valid data point and the first (last) NaN. For example, consider:
<xarray.DataArray (x: 9)> array([nan, nan, nan, 1., nan, nan, 4., nan, nan]) Coordinates: * x (x) int64 0 1 2 3 4 5 6 7 8
The gap lengths are 3-0 = 3; 6-3 = 3; and 8-6 = 2 respectively
keep_attrs (bool or None, default: None) – If True, the dataarray’s attributes (attrs) will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**kwargs (dict, optional) – parameters passed verbatim to the underlying interpolation function
- Returns
interpolated – Filled in DataArray.
- Return type
DataArray
See also
numpy.interp, scipy.interpolate
Examples
>>> da = xr.DataArray( ... [np.nan, 2, 3, np.nan, 0], dims="x", coords={"x": [0, 1, 2, 3, 4]} ... ) >>> da <xarray.DataArray (x: 5)> array([nan, 2., 3., nan, 0.]) Coordinates: * x (x) int64 0 1 2 3 4
>>> da.interpolate_na(dim="x", method="linear") <xarray.DataArray (x: 5)> array([nan, 2. , 3. , 1.5, 0. ]) Coordinates: * x (x) int64 0 1 2 3 4
>>> da.interpolate_na(dim="x", method="linear", fill_value="extrapolate") <xarray.DataArray (x: 5)> array([1. , 2. , 3. , 1.5, 0. ]) Coordinates: * x (x) int64 0 1 2 3 4
- isel(indexers: Mapping[Any, Any] | None = None, drop: bool = False, missing_dims: ErrorOptionsWithWarn = 'raise', **indexers_kwargs: Any) Self #
Return a new DataArray whose data is given by selecting indexes along the specified dimension(s).
- Parameters
indexers (dict, optional) – A dict with keys matching dimensions and values given by integers, slice objects or arrays. indexer can be an integer, slice, array-like or DataArray. If DataArrays are passed as indexers, xarray-style indexing will be carried out. See indexing for the details. One of indexers or indexers_kwargs must be provided.
drop (bool, default: False) – If drop=True, drop coordinate variables indexed by integers instead of making them scalar.
missing_dims ({"raise", "warn", "ignore"}, default: "raise") – What to do if dimensions that should be selected from are not present in the DataArray: - "raise": raise an exception - "warn": raise a warning, and ignore the missing dimensions - "ignore": ignore the missing dimensions
**indexers_kwargs ({dim: indexer, ...}, optional) – The keyword arguments form of indexers.
- Returns
indexed
- Return type
xarray.DataArray
See also
Dataset.isel DataArray.sel
- xarray-tutorial:intermediate/indexing/indexing
Tutorial material on indexing with Xarray objects
- xarray-tutorial:fundamentals/02.1_indexing_Basic
Tutorial material on basics of indexing
Examples
>>> da = xr.DataArray(np.arange(25).reshape(5, 5), dims=("x", "y")) >>> da <xarray.DataArray (x: 5, y: 5)> array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) Dimensions without coordinates: x, y
>>> tgt_x = xr.DataArray(np.arange(0, 5), dims="points") >>> tgt_y = xr.DataArray(np.arange(0, 5), dims="points") >>> da = da.isel(x=tgt_x, y=tgt_y) >>> da <xarray.DataArray (points: 5)> array([ 0, 6, 12, 18, 24]) Dimensions without coordinates: points
- isin(test_elements: Any) Self #
Tests each value in the array for whether it is in test elements.
- Parameters
test_elements (array_like) – The values against which to test each value of element. This argument is flattened if an array or array_like. See numpy notes for behavior with non-array-like parameters.
- Returns
isin – Has the same type and shape as this object, but with a bool dtype.
- Return type
DataArray or Dataset
Examples
>>> array = xr.DataArray([1, 2, 3], dims="x") >>> array.isin([1, 3]) <xarray.DataArray (x: 3)> array([ True, False, True]) Dimensions without coordinates: x
See also
numpy.isin
- isnull(keep_attrs: bool | None = None) Self #
Test each value in the array for whether it is a missing value.
- Parameters
keep_attrs (bool or None, optional) – If True, the attributes (attrs) will be copied from the original object to the new one. If False, the new object will be returned without attributes.
- Returns
isnull – Same type and shape as object, but the dtype of the data is bool.
- Return type
DataArray or Dataset
See also
pandas.isnull
Examples
>>> array = xr.DataArray([1, np.nan, 3], dims="x") >>> array <xarray.DataArray (x: 3)> array([ 1., nan, 3.]) Dimensions without coordinates: x >>> array.isnull() <xarray.DataArray (x: 3)> array([False, True, False]) Dimensions without coordinates: x
- item(*args)#
Copy an element of an array to a standard Python scalar and return it.
- Parameters
*args (Arguments (variable number and type)) –
none: in this case, the method only works for arrays with one element (a.size == 1), which element is copied into a standard Python scalar object and returned.
int_type: this argument is interpreted as a flat index into the array, specifying which element to copy and return.
tuple of int_types: functions as does a single int_type argument, except that the argument is interpreted as an nd-index into the array.
- Returns
z – A copy of the specified element of the array as a suitable Python scalar
- Return type
Standard Python scalar object
Notes
When the data type of a is longdouble or clongdouble, item() returns a scalar array object because there is no available Python scalar that would not lose information. Void arrays return a buffer object for item(), unless fields are defined, in which case a tuple is returned.
item is very similar to a[args], except, instead of an array scalar, a standard Python scalar is returned. This can be useful for speeding up access to elements of the array and doing arithmetic on elements of the array using Python’s optimized math.
Examples
>>> np.random.seed(123) >>> x = np.random.randint(9, size=(3, 3)) >>> x array([[2, 2, 6], [1, 3, 6], [1, 0, 1]]) >>> x.item(3) 1 >>> x.item(7) 0 >>> x.item((0, 1)) 2 >>> x.item((2, 2)) 1
- load(**kwargs) Self #
Manually trigger loading of this array’s data from disk or a remote source into memory and return this array.
Normally, it should not be necessary to call this method in user code, because all xarray functions should either work on deferred data or load data automatically. However, this method can be necessary when working with many file objects on disk.
- Parameters
**kwargs (dict) – Additional keyword arguments passed on to dask.compute.
See also
dask.compute
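A minimal sketch (not part of the upstream reference) of when load() matters; the file name here is hypothetical, and open_dataarray defers reading until the values are needed:
>>> da = xr.open_dataarray("fields.nc")  # hypothetical NetCDF file, opened lazily >>> da = da.load()  # values are now read into memory; later operations avoid re-reading from disk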
- property loc: xarray.core.dataarray._LocIndexer#
Attribute for location based indexing like pandas.
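A small sketch (not from the upstream reference) of label-based lookup with .loc; note that label slices are inclusive of both endpoints:
>>> da = xr.DataArray([10, 20, 30], coords={"x": [1, 2, 3]}, dims="x") >>> da.loc[2] <xarray.DataArray ()> array(20) Coordinates: x int64 2 >>> da.loc[1:2] <xarray.DataArray (x: 2)> array([10, 20]) Coordinates: * x (x) int64 1 2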
- map_blocks(func: Callable[..., T_Xarray], args: Sequence[Any] = (), kwargs: Mapping[str, Any] | None = None, template: DataArray | Dataset | None = None) T_Xarray #
Apply a function to each block of this DataArray.
Warning
This method is experimental and its signature may change.
- Parameters
func (callable) – User-provided function that accepts a DataArray as its first parameter. The function will receive a subset or ‘block’ of this DataArray (see below), corresponding to one chunk along each chunked dimension. func will be executed as func(subset_dataarray, *subset_args, **kwargs). This function must return either a single DataArray or a single Dataset. This function cannot add a new chunked dimension.
args (sequence) – Passed to func after unpacking and subsetting any xarray objects by blocks. xarray objects in args must be aligned with this object, otherwise an error is raised.
kwargs (mapping) – Passed verbatim to func after unpacking. xarray objects, if any, will not be subset to blocks. Passing dask collections in kwargs is not allowed.
template (DataArray or Dataset, optional) – xarray object representing the final result after compute is called. If not provided, the function will be first run on mocked-up data, that looks like this object but has sizes 0, to determine properties of the returned object such as dtype, variable names, attributes, new dimensions and new indexes (if any). template must be provided if the function changes the size of existing dimensions. When provided, attrs on variables in template are copied over to the result. Any attrs set by func will be ignored.
- Returns
A single DataArray or Dataset with dask backend, reassembled from the outputs of the function.
Notes
This function is designed for when func needs to manipulate a whole xarray object subset to each block. Each block is loaded into memory. In the more common case where func can work on numpy arrays, it is recommended to use apply_ufunc.
If none of the variables in this object is backed by dask arrays, calling this function is equivalent to calling func(obj, *args, **kwargs).
See also
dask.array.map_blocks, xarray.apply_ufunc, xarray.Dataset.map_blocks, xarray.DataArray.map_blocks
- xarray-tutorial:advanced/map_blocks/map_blocks
Advanced Tutorial on map_blocks with dask
Examples
Calculate an anomaly from climatology using .groupby(). Using xr.map_blocks() allows for parallel operations with knowledge of xarray, its indices, and its methods like .groupby().
>>> def calculate_anomaly(da, groupby_type="time.month"): ... gb = da.groupby(groupby_type) ... clim = gb.mean(dim="time") ... return gb - clim ... >>> time = xr.cftime_range("1990-01", "1992-01", freq="ME") >>> month = xr.DataArray(time.month, coords={"time": time}, dims=["time"]) >>> np.random.seed(123) >>> array = xr.DataArray( ... np.random.rand(len(time)), ... dims=["time"], ... coords={"time": time, "month": month}, ... ).chunk() >>> array.map_blocks(calculate_anomaly, template=array).compute() <xarray.DataArray (time: 24)> array([ 0.12894847, 0.11323072, -0.0855964 , -0.09334032, 0.26848862, 0.12382735, 0.22460641, 0.07650108, -0.07673453, -0.22865714, -0.19063865, 0.0590131 , -0.12894847, -0.11323072, 0.0855964 , 0.09334032, -0.26848862, -0.12382735, -0.22460641, -0.07650108, 0.07673453, 0.22865714, 0.19063865, -0.0590131 ]) Coordinates: * time (time) object 1990-01-31 00:00:00 ... 1991-12-31 00:00:00 month (time) int64 1 2 3 4 5 6 7 8 9 10 11 12 1 2 3 4 5 6 7 8 9 10 11 12
Note that one must explicitly use args=[] and kwargs={} to pass arguments to the function being applied in xr.map_blocks():
>>> array.map_blocks( ... calculate_anomaly, kwargs={"groupby_type": "time.year"}, template=array ... ) <xarray.DataArray (time: 24)> dask.array<<this-array>-calculate_anomaly, shape=(24,), dtype=float64, chunksize=(24,), chunktype=numpy.ndarray> Coordinates: * time (time) object 1990-01-31 00:00:00 ... 1991-12-31 00:00:00 month (time) int64 dask.array<chunksize=(24,), meta=np.ndarray>
- max(dim: Dims = None, *, skipna: bool | None = None, keep_attrs: bool | None = None, **kwargs: Any) Self #
Reduce this DataArray’s data by applying max along some dimension(s).
- Parameters
dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply max, e.g. dim="x" or dim=["x", "y"]. If “…” or None, will reduce over all dimensions.
skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).
keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating max on this object’s data. These could include dask-specific kwargs like split_every.
- Returns
reduced – New DataArray with max applied to its data and the indicated dimension(s) removed.
- Return type
DataArray
See also
numpy.max, dask.array.max, Dataset.max
- agg
User guide on reduction or aggregation operations.
Examples
>>> da = xr.DataArray( ... np.array([1, 2, 3, 0, 2, np.nan]), ... dims="time", ... coords=dict( ... time=("time", pd.date_range("2001-01-01", freq="M", periods=6)), ... labels=("time", np.array(["a", "b", "c", "c", "b", "a"])), ... ), ... ) >>> da <xarray.DataArray (time: 6)> array([ 1., 2., 3., 0., 2., nan]) Coordinates: * time (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30 labels (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.max() <xarray.DataArray ()> array(3.)
Use skipna to control whether NaNs are ignored.
>>> da.max(skipna=False) <xarray.DataArray ()> array(nan)
- mean(dim: Dims = None, *, skipna: bool | None = None, keep_attrs: bool | None = None, **kwargs: Any) Self #
Reduce this DataArray’s data by applying mean along some dimension(s).
- Parameters
dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply mean, e.g. dim="x" or dim=["x", "y"]. If “…” or None, will reduce over all dimensions.
skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).
keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating mean on this object’s data. These could include dask-specific kwargs like split_every.
- Returns
reduced – New DataArray with mean applied to its data and the indicated dimension(s) removed.
- Return type
DataArray
See also
numpy.mean, dask.array.mean, Dataset.mean
- agg
User guide on reduction or aggregation operations.
Notes
Non-numeric variables will be removed prior to reducing.
Examples
>>> da = xr.DataArray( ... np.array([1, 2, 3, 0, 2, np.nan]), ... dims="time", ... coords=dict( ... time=("time", pd.date_range("2001-01-01", freq="M", periods=6)), ... labels=("time", np.array(["a", "b", "c", "c", "b", "a"])), ... ), ... ) >>> da <xarray.DataArray (time: 6)> array([ 1., 2., 3., 0., 2., nan]) Coordinates: * time (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30 labels (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.mean() <xarray.DataArray ()> array(1.6)
Use skipna to control whether NaNs are ignored.
>>> da.mean(skipna=False) <xarray.DataArray ()> array(nan)
- median(dim: Dims = None, *, skipna: bool | None = None, keep_attrs: bool | None = None, **kwargs: Any) Self #
Reduce this DataArray’s data by applying median along some dimension(s).
- Parameters
dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply median, e.g. dim="x" or dim=["x", "y"]. If “…” or None, will reduce over all dimensions.
skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).
keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating median on this object’s data. These could include dask-specific kwargs like split_every.
- Returns
reduced – New DataArray with median applied to its data and the indicated dimension(s) removed.
- Return type
DataArray
See also
numpy.median, dask.array.median, Dataset.median
- agg
User guide on reduction or aggregation operations.
Notes
Non-numeric variables will be removed prior to reducing.
Examples
>>> da = xr.DataArray( ... np.array([1, 2, 3, 0, 2, np.nan]), ... dims="time", ... coords=dict( ... time=("time", pd.date_range("2001-01-01", freq="M", periods=6)), ... labels=("time", np.array(["a", "b", "c", "c", "b", "a"])), ... ), ... ) >>> da <xarray.DataArray (time: 6)> array([ 1., 2., 3., 0., 2., nan]) Coordinates: * time (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30 labels (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.median() <xarray.DataArray ()> array(2.)
Use skipna to control whether NaNs are ignored.
>>> da.median(skipna=False) <xarray.DataArray ()> array(nan)
- min(dim: Dims = None, *, skipna: bool | None = None, keep_attrs: bool | None = None, **kwargs: Any) Self #
Reduce this DataArray’s data by applying min along some dimension(s).
- Parameters
dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply min, e.g. dim="x" or dim=["x", "y"]. If “…” or None, will reduce over all dimensions.
skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).
keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating min on this object’s data. These could include dask-specific kwargs like split_every.
- Returns
reduced – New DataArray with min applied to its data and the indicated dimension(s) removed.
- Return type
DataArray
See also
numpy.min, dask.array.min, Dataset.min
- agg
User guide on reduction or aggregation operations.
Examples
>>> da = xr.DataArray( ... np.array([1, 2, 3, 0, 2, np.nan]), ... dims="time", ... coords=dict( ... time=("time", pd.date_range("2001-01-01", freq="M", periods=6)), ... labels=("time", np.array(["a", "b", "c", "c", "b", "a"])), ... ), ... ) >>> da <xarray.DataArray (time: 6)> array([ 1., 2., 3., 0., 2., nan]) Coordinates: * time (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30 labels (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.min() <xarray.DataArray ()> array(0.)
Use skipna to control whether NaNs are ignored.
>>> da.min(skipna=False) <xarray.DataArray ()> array(nan)
- multiply_at(value: complex, coord_name: str, indices: List[int]) tidy3d.components.data.data_array.DataArray #
Multiply self by value at indices into coord_name.
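Examples
A minimal usage sketch, assuming the method scales the entries at the given positions along the named coordinate and leaves all other entries unchanged:
>>> coords = dict(x=[1, 2], y=[3, 4, 5], z=[4, 5, 6, 7]) >>> data = SpatialDataArray(np.ones((2, 3, 4)), coords=coords) >>> scaled = data.multiply_at(value=2.0, coord_name="x", indices=[0])  # doubles the slice at the first x position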
- property name: collections.abc.Hashable | None#
The name of this array.
- property nbytes: int#
Total bytes consumed by the elements of this DataArray’s data.
If the underlying data array does not include nbytes, estimates the bytes consumed based on the size and dtype.
- property ndim: int#
Number of array dimensions.
See also
numpy.ndarray.ndim
- notnull(keep_attrs: bool | None = None) Self #
Test each value in the array for whether it is not a missing value.
- Parameters
keep_attrs (bool or None, optional) – If True, the attributes (attrs) will be copied from the original object to the new one. If False, the new object will be returned without attributes.
- Returns
notnull – Same type and shape as object, but the dtype of the data is bool.
- Return type
DataArray or Dataset
See also
pandas.notnull
Examples
>>> array = xr.DataArray([1, np.nan, 3], dims="x") >>> array <xarray.DataArray (x: 3)> array([ 1., nan, 3.]) Dimensions without coordinates: x >>> array.notnull() <xarray.DataArray (x: 3)> array([ True, False, True]) Dimensions without coordinates: x
- pad(pad_width: Mapping[Any, int | tuple[int, int]] | None = None, mode: PadModeOptions = 'constant', stat_length: int | tuple[int, int] | Mapping[Any, tuple[int, int]] | None = None, constant_values: float | tuple[float, float] | Mapping[Any, tuple[float, float]] | None = None, end_values: int | tuple[int, int] | Mapping[Any, tuple[int, int]] | None = None, reflect_type: PadReflectOptions = None, keep_attrs: bool | None = None, **pad_width_kwargs: Any) Self #
Pad this array along one or more dimensions.
Warning
This function is experimental and its behaviour is likely to change especially regarding padding of dimension coordinates (or IndexVariables).
When using one of the modes (“edge”, “reflect”, “symmetric”, “wrap”), coordinates will be padded with the same mode, otherwise coordinates are padded using the “constant” mode with fill_value dtypes.NA.
- Parameters
pad_width (mapping of Hashable to tuple of int) – Mapping with the form of {dim: (pad_before, pad_after)} describing the number of values padded along each dimension. {dim: pad} is a shortcut for pad_before = pad_after = pad
mode ({"constant", "edge", "linear_ramp", "maximum", "mean", "median", "minimum", "reflect", "symmetric", "wrap"}, default: "constant") –
How to pad the DataArray (taken from numpy docs):
“constant”: Pads with a constant value.
“edge”: Pads with the edge values of array.
“linear_ramp”: Pads with the linear ramp between end_value and the array edge value.
“maximum”: Pads with the maximum value of all or part of the vector along each axis.
“mean”: Pads with the mean value of all or part of the vector along each axis.
“median”: Pads with the median value of all or part of the vector along each axis.
“minimum”: Pads with the minimum value of all or part of the vector along each axis.
“reflect”: Pads with the reflection of the vector mirrored on the first and last values of the vector along each axis.
“symmetric”: Pads with the reflection of the vector mirrored along the edge of the array.
“wrap”: Pads with the wrap of the vector along the axis. The first values are used to pad the end and the end values are used to pad the beginning.
stat_length (int, tuple or mapping of Hashable to tuple, default: None) – Used in ‘maximum’, ‘mean’, ‘median’, and ‘minimum’. Number of values at edge of each axis used to calculate the statistic value. {dim_1: (before_1, after_1), … dim_N: (before_N, after_N)} unique statistic lengths along each dimension. ((before, after),) yields same before and after statistic lengths for each dimension. (stat_length,) or int is a shortcut for before = after = statistic length for all axes. Default is None, to use the entire axis.
constant_values (scalar, tuple or mapping of Hashable to tuple, default: 0) – Used in ‘constant’. The values to set the padded values for each axis. {dim_1: (before_1, after_1), ... dim_N: (before_N, after_N)} unique pad constants along each dimension. ((before, after),) yields same before and after constants for each dimension. (constant,) or constant is a shortcut for before = after = constant for all dimensions. Default is 0.
end_values (scalar, tuple or mapping of Hashable to tuple, default: 0) – Used in ‘linear_ramp’. The values used for the ending value of the linear_ramp and that will form the edge of the padded array. {dim_1: (before_1, after_1), ... dim_N: (before_N, after_N)} unique end values along each dimension. ((before, after),) yields same before and after end values for each axis. (constant,) or constant is a shortcut for before = after = constant for all axes. Default is 0.
reflect_type ({"even", "odd", None}, optional) – Used in “reflect”, and “symmetric”. The “even” style is the default with an unaltered reflection around the edge value. For the “odd” style, the extended part of the array is created by subtracting the reflected values from two times the edge value.
keep_attrs (bool or None, optional) – If True, the attributes (attrs) will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**pad_width_kwargs – The keyword arguments form of pad_width. One of pad_width or pad_width_kwargs must be provided.
- Returns
padded – DataArray with the padded coordinates and data.
- Return type
DataArray
See also
DataArray.shift, DataArray.roll, DataArray.bfill, DataArray.ffill, numpy.pad, dask.array.pad
Notes
For mode="constant" and constant_values=None, integer types will be promoted to float and padded with np.nan.
Padding coordinates will drop their corresponding index (if any) and will reset default indexes for dimension coordinates.
Examples
>>> arr = xr.DataArray([5, 6, 7], coords=[("x", [0, 1, 2])]) >>> arr.pad(x=(1, 2), constant_values=0) <xarray.DataArray (x: 6)> array([0, 5, 6, 7, 0, 0]) Coordinates: * x (x) float64 nan 0.0 1.0 2.0 nan nan
>>> da = xr.DataArray( ... [[0, 1, 2, 3], [10, 11, 12, 13]], ... dims=["x", "y"], ... coords={"x": [0, 1], "y": [10, 20, 30, 40], "z": ("x", [100, 200])}, ... ) >>> da.pad(x=1) <xarray.DataArray (x: 4, y: 4)> array([[nan, nan, nan, nan], [ 0., 1., 2., 3.], [10., 11., 12., 13.], [nan, nan, nan, nan]]) Coordinates: * x (x) float64 nan 0.0 1.0 nan * y (y) int64 10 20 30 40 z (x) float64 nan 100.0 200.0 nan
Careful, constant_values are coerced to the data type of the array which may lead to a loss of precision:
>>> da.pad(x=1, constant_values=1.23456789) <xarray.DataArray (x: 4, y: 4)> array([[ 1, 1, 1, 1], [ 0, 1, 2, 3], [10, 11, 12, 13], [ 1, 1, 1, 1]]) Coordinates: * x (x) float64 nan 0.0 1.0 nan * y (y) int64 10 20 30 40 z (x) float64 nan 100.0 200.0 nan
- persist(**kwargs) Self #
Trigger computation in constituent dask arrays
This keeps them as dask arrays but encourages them to keep data in memory. This is particularly useful when on a distributed machine. When on a single machine consider using .compute() instead.
- Parameters
**kwargs (dict) – Additional keyword arguments passed on to dask.persist.
See also
dask.persist
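A short sketch (not from the upstream reference) of the contrast with .compute(), assuming dask is installed and da is an existing DataArray; the chunk sizes here are hypothetical:
>>> chunked = da.chunk({"x": 1})  # hypothetical chunking along x >>> persisted = chunked.persist()  # still dask-backed, but the blocks are now held in memory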
- pipe(func: Union[Callable[[...], xarray.core.common.T], tuple[Callable[..., T], str]], *args: Any, **kwargs: Any) xarray.core.common.T #
Apply func(self, *args, **kwargs).
This method replicates the pandas method of the same name.
- Parameters
func (callable) – function to apply to this xarray object (Dataset/DataArray). args, and kwargs are passed into func. Alternatively a (callable, data_keyword) tuple where data_keyword is a string indicating the keyword of callable that expects the xarray object.
*args – positional arguments passed into func.
**kwargs – a dictionary of keyword arguments passed into func.
- Returns
object – the return type of func.
- Return type
Any
Notes
Use .pipe when chaining together functions that expect xarray or pandas objects, e.g., instead of writing
f(g(h(ds), arg1=a), arg2=b, arg3=c)
You can write
(ds.pipe(h).pipe(g, arg1=a).pipe(f, arg2=b, arg3=c))
If you have a function that takes the data as (say) the second argument, pass a tuple indicating which keyword expects the data. For example, suppose f takes its data as arg2:
(ds.pipe(h).pipe(g, arg1=a).pipe((f, "arg2"), arg1=a, arg3=c))
Examples
>>> x = xr.Dataset( ... { ... "temperature_c": ( ... ("lat", "lon"), ... 20 * np.random.rand(4).reshape(2, 2), ... ), ... "precipitation": (("lat", "lon"), np.random.rand(4).reshape(2, 2)), ... }, ... coords={"lat": [10, 20], "lon": [150, 160]}, ... ) >>> x <xarray.Dataset> Dimensions: (lat: 2, lon: 2) Coordinates: * lat (lat) int64 10 20 * lon (lon) int64 150 160 Data variables: temperature_c (lat, lon) float64 10.98 14.3 12.06 10.9 precipitation (lat, lon) float64 0.4237 0.6459 0.4376 0.8918
>>> def adder(data, arg): ... return data + arg ... >>> def div(data, arg): ... return data / arg ... >>> def sub_mult(data, sub_arg, mult_arg): ... return (data * mult_arg) - sub_arg ... >>> x.pipe(adder, 2) <xarray.Dataset> Dimensions: (lat: 2, lon: 2) Coordinates: * lat (lat) int64 10 20 * lon (lon) int64 150 160 Data variables: temperature_c (lat, lon) float64 12.98 16.3 14.06 12.9 precipitation (lat, lon) float64 2.424 2.646 2.438 2.892
>>> x.pipe(adder, arg=2) <xarray.Dataset> Dimensions: (lat: 2, lon: 2) Coordinates: * lat (lat) int64 10 20 * lon (lon) int64 150 160 Data variables: temperature_c (lat, lon) float64 12.98 16.3 14.06 12.9 precipitation (lat, lon) float64 2.424 2.646 2.438 2.892
>>> ( ... x.pipe(adder, arg=2) ... .pipe(div, arg=2) ... .pipe(sub_mult, sub_arg=2, mult_arg=2) ... ) <xarray.Dataset> Dimensions: (lat: 2, lon: 2) Coordinates: * lat (lat) int64 10 20 * lon (lon) int64 150 160 Data variables: temperature_c (lat, lon) float64 10.98 14.3 12.06 10.9 precipitation (lat, lon) float64 0.4237 0.6459 0.4376 0.8918
See also
pandas.DataFrame.pipe
- plot#
alias of xarray.plot.accessor.DataArrayPlotAccessor
- polyfit(dim: collections.abc.Hashable, deg: int, skipna: Optional[bool] = None, rcond: Optional[float] = None, w: Optional[Union[collections.abc.Hashable, Any]] = None, full: bool = False, cov: Union[bool, Literal['unscaled']] = False) xarray.core.dataset.Dataset #
Least squares polynomial fit.
This replicates the behaviour of numpy.polyfit but differs by skipping invalid values when skipna = True.
- Parameters
dim (Hashable) – Coordinate along which to fit the polynomials.
deg (int) – Degree of the fitting polynomial.
skipna (bool or None, optional) – If True, removes all invalid values before fitting each 1D slice of the array. Default is True if data is stored in a dask.array or if there are any invalid values, False otherwise.
rcond (float or None, optional) – Relative condition number to the fit.
w (Hashable, array-like or None, optional) – Weights to apply to the y-coordinate of the sample points. Can be an array-like object or the name of a coordinate in the dataset.
full (bool, default: False) – Whether to return the residuals, matrix rank and singular values in addition to the coefficients.
cov (bool or "unscaled", default: False) – Whether to return the covariance matrix in addition to the coefficients. The matrix is not scaled if cov=’unscaled’.
- Returns
polyfit_results – A single dataset which contains:
- polyfit_coefficients
The coefficients of the best fit.
- polyfit_residuals
The residuals of the least-square computation (only included if full=True). When the matrix rank is deficient, np.nan is returned.
- [dim]_matrix_rank
The effective rank of the scaled Vandermonde coefficient matrix (only included if full=True)
- [dim]_singular_value
The singular values of the scaled Vandermonde coefficient matrix (only included if full=True)
- polyfit_covariance
The covariance matrix of the polynomial coefficient estimates (only included if full=False and cov=True)
- Return type
Dataset
See also
numpy.polyfit, numpy.polyval, xarray.polyval, DataArray.curvefit
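Examples
A quick sketch (not part of the upstream reference): fitting a quadratic recovers its coefficients, highest degree first:
>>> da = xr.DataArray([0.0, 1.0, 4.0, 9.0], dims="x", coords={"x": [0, 1, 2, 3]}) >>> fit = da.polyfit(dim="x", deg=2) >>> coeffs = fit.polyfit_coefficients  # approximately [1, 0, 0] for this exact quadratic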
- prod(dim: Dims = None, *, skipna: bool | None = None, min_count: int | None = None, keep_attrs: bool | None = None, **kwargs: Any) Self #
Reduce this DataArray’s data by applying prod along some dimension(s).
- Parameters
dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply prod, e.g. dim="x" or dim=["x", "y"]. If “…” or None, will reduce over all dimensions.
skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).
min_count (int or None, optional) – The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Only used if skipna is set to True or defaults to True for the array’s dtype. Changed in version 0.17.0: if specified on an integer array and skipna=True, the result will be a float array.
keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating prod on this object’s data. These could include dask-specific kwargs like split_every.
- Returns
reduced – New DataArray with prod applied to its data and the indicated dimension(s) removed.
- Return type
DataArray
See also
numpy.prod, dask.array.prod, Dataset.prod
- agg
User guide on reduction or aggregation operations.
Notes
Non-numeric variables will be removed prior to reducing.
Examples
>>> da = xr.DataArray( ... np.array([1, 2, 3, 0, 2, np.nan]), ... dims="time", ... coords=dict( ... time=("time", pd.date_range("2001-01-01", freq="M", periods=6)), ... labels=("time", np.array(["a", "b", "c", "c", "b", "a"])), ... ), ... ) >>> da <xarray.DataArray (time: 6)> array([ 1., 2., 3., 0., 2., nan]) Coordinates: * time (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30 labels (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.prod() <xarray.DataArray ()> array(0.)
Use skipna to control whether NaNs are ignored.
>>> da.prod(skipna=False) <xarray.DataArray ()> array(nan)
Specify min_count for finer control over when NaNs are ignored.
>>> da.prod(skipna=True, min_count=2) <xarray.DataArray ()> array(0.)
- quantile(q: ArrayLike, dim: Dims = None, *, method: QuantileMethods = 'linear', keep_attrs: bool | None = None, skipna: bool | None = None, interpolation: QuantileMethods | None = None) Self #
Compute the qth quantile of the data along the specified dimension.
Returns the qth quantile(s) of the array elements.
- Parameters
q (float or array-like of float) – Quantile to compute, which must be between 0 and 1 inclusive.
dim (str or Iterable of Hashable, optional) – Dimension(s) over which to apply quantile.
method (str, default: "linear") –
This optional parameter specifies the interpolation method to use when the desired quantile lies between two data points. The options sorted by their R type as summarized in the H&F paper [1]_ are:
”inverted_cdf”
”averaged_inverted_cdf”
”closest_observation”
”interpolated_inverted_cdf”
”hazen”
”weibull”
”linear” (default)
”median_unbiased”
”normal_unbiased”
The first three methods are discontinuous. The following discontinuous variations of the default “linear” (7.) option are also available:
”lower”
”higher”
”midpoint”
”nearest”
See numpy.quantile() or [1]_ for details. The “method” argument was previously called “interpolation”, renamed in accordance with numpy version 1.22.0.
keep_attrs (bool or None, optional) – If True, the dataset’s attributes (attrs) will be copied from the original object to the new one. If False (default), the new object will be returned without attributes.
skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).
- Returns
quantiles – If q is a single quantile, then the result is a scalar. If multiple percentiles are given, first axis of the result corresponds to the quantile and a quantile dimension is added to the return array. The other dimensions are the dimensions that remain after the reduction of the array.
- Return type
DataArray
See also
numpy.nanquantile, numpy.quantile, pandas.Series.quantile, Dataset.quantile
Examples
>>> da = xr.DataArray( ... data=[[0.7, 4.2, 9.4, 1.5], [6.5, 7.3, 2.6, 1.9]], ... coords={"x": [7, 9], "y": [1, 1.5, 2, 2.5]}, ... dims=("x", "y"), ... ) >>> da.quantile(0) # or da.quantile(0, dim=...) <xarray.DataArray ()> array(0.7) Coordinates: quantile float64 0.0 >>> da.quantile(0, dim="x") <xarray.DataArray (y: 4)> array([0.7, 4.2, 2.6, 1.5]) Coordinates: * y (y) float64 1.0 1.5 2.0 2.5 quantile float64 0.0 >>> da.quantile([0, 0.5, 1]) <xarray.DataArray (quantile: 3)> array([0.7, 3.4, 9.4]) Coordinates: * quantile (quantile) float64 0.0 0.5 1.0 >>> da.quantile([0, 0.5, 1], dim="x") <xarray.DataArray (quantile: 3, y: 4)> array([[0.7 , 4.2 , 2.6 , 1.5 ], [3.6 , 5.75, 6. , 1.7 ], [6.5 , 7.3 , 9.4 , 1.9 ]]) Coordinates: * y (y) float64 1.0 1.5 2.0 2.5 * quantile (quantile) float64 0.0 0.5 1.0
References
[1] R. J. Hyndman and Y. Fan, “Sample quantiles in statistical packages,” The American Statistician, 50(4), pp. 361-365, 1996
- query(queries: Mapping[Any, Any] | None = None, parser: QueryParserOptions = 'pandas', engine: QueryEngineOptions = None, missing_dims: ErrorOptionsWithWarn = 'raise', **queries_kwargs: Any) DataArray #
Return a new data array indexed along the specified dimension(s), where the indexers are given as strings containing Python expressions to be evaluated against the values in the array.
- Parameters
queries (dict-like or None, optional) – A dict-like with keys matching dimensions and values given by strings containing Python expressions to be evaluated against the data variables in the dataset. The expressions will be evaluated using the pandas eval() function, and can contain any valid Python expressions but cannot contain any Python statements.
parser ({"pandas", "python"}, default: "pandas") – The parser to use to construct the syntax tree from the expression. The default of ‘pandas’ parses code slightly different than standard Python. Alternatively, you can parse an expression using the ‘python’ parser to retain strict Python semantics.
engine ({"python", "numexpr", None}, default: None) –
The engine used to evaluate the expression. Supported engines are:
None: tries to use numexpr, falls back to python
”numexpr”: evaluates expressions using numexpr
”python”: performs operations as if you had eval’d in top level python
missing_dims ({"raise", "warn", "ignore"}, default: "raise") –
What to do if dimensions that should be selected from are not present in the DataArray:
”raise”: raise an exception
”warn”: raise a warning, and ignore the missing dimensions
”ignore”: ignore the missing dimensions
**queries_kwargs ({dim: query, ...}, optional) – The keyword arguments form of queries. One of queries or queries_kwargs must be provided.
- Returns
obj – A new DataArray with the same contents as this dataset, indexed by the results of the appropriate queries.
- Return type
DataArray
See also
DataArray.isel, Dataset.query, pandas.eval
Examples
>>> da = xr.DataArray(np.arange(0, 5, 1), dims="x", name="a") >>> da <xarray.DataArray 'a' (x: 5)> array([0, 1, 2, 3, 4]) Dimensions without coordinates: x >>> da.query(x="a > 2") <xarray.DataArray 'a' (x: 2)> array([3, 4]) Dimensions without coordinates: x
- rank(dim: Hashable, *, pct: bool = False, keep_attrs: bool | None = None) Self #
Ranks the data.
Equal values are assigned a rank that is the average of the ranks that would have been otherwise assigned to all of the values within that set. Ranks begin at 1, not 0. If pct, computes percentage ranks.
NaNs in the input array are returned as NaNs.
The bottleneck library is required.
- Parameters
dim (Hashable) – Dimension over which to compute rank.
pct (bool, default: False) – If True, compute percentage ranks, otherwise compute integer ranks.
keep_attrs (bool or None, optional) – If True, the dataset’s attributes (attrs) will be copied from the original object to the new one. If False (default), the new object will be returned without attributes.
- Returns
ranked – DataArray with the same coordinates and dtype ‘float64’.
- Return type
DataArray
Examples
>>> arr = xr.DataArray([5, 6, 7], dims="x") >>> arr.rank("x") <xarray.DataArray (x: 3)> array([1., 2., 3.]) Dimensions without coordinates: x
- property real: Self#
The real part of the array.
See also
numpy.ndarray.real
- reduce(func: Callable[..., Any], dim: Dims = None, *, axis: int | Sequence[int] | None = None, keep_attrs: bool | None = None, keepdims: bool = False, **kwargs: Any) Self #
Reduce this array by applying func along some dimension(s).
- Parameters
func (callable) – Function which can be called in the form f(x, axis=axis, **kwargs) to return the result of reducing an np.ndarray over an integer valued axis.
dim ("...", str, Iterable of Hashable or None, optional) – Dimension(s) over which to apply func. By default func is applied over all dimensions.
axis (int or sequence of int, optional) – Axis(es) over which to repeatedly apply func. Only one of the ‘dim’ and ‘axis’ arguments can be supplied. If neither are supplied, then the reduction is calculated over the flattened array (by calling f(x) without an axis argument).
keep_attrs (bool or None, optional) – If True, the variable’s attributes (attrs) will be copied from the original object to the new one. If False (default), the new object will be returned without attributes.
keepdims (bool, default: False) – If True, the dimensions which are reduced are left in the result as dimensions of size one. Coordinates that use these dimensions are removed.
**kwargs (dict) – Additional keyword arguments passed on to func.
- Returns
reduced – DataArray with this object’s array replaced with an array with summarized data and the indicated dimension(s) removed.
- Return type
DataArray
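Examples
A minimal sketch (not from the upstream reference) of reducing with a plain numpy function:
>>> da = xr.DataArray(np.arange(6).reshape(2, 3), dims=("x", "y")) >>> da.reduce(np.sum, dim="y") <xarray.DataArray (x: 2)> array([ 3, 12]) Dimensions without coordinates: x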
- reflect(axis: Literal[0, 1, 2], center: float) tidy3d.components.data.data_array.SpatialDataArray #
Reflect data across the plane defined by parameters axis and center from right to left.
- Parameters
axis (Literal[0, 1, 2]) – Normal direction of the reflection plane.
center (float) – Location of the reflection plane along its normal direction.
- Returns
Data after reflection is performed.
- Return type
SpatialDataArray
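Examples
A minimal usage sketch, assuming the mirrored copy of the data is placed on the other side of the reflection plane (so coordinates along axis end up mirrored about center):
>>> coords = dict(x=[1, 2], y=[3, 4, 5], z=[4, 5, 6, 7]) >>> data = SpatialDataArray(np.random.random((2, 3, 4)), coords=coords) >>> reflected = data.reflect(axis=0, center=0.0)  # mirror across the x=0 plane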
- reindex(indexers: Mapping[Any, Any] | None = None, *, method: ReindexMethodOptions = None, tolerance: float | Iterable[float] | None = None, copy: bool = True, fill_value=<NA>, **indexers_kwargs: Any) Self #
Conform this object onto the indexes of another object, filling in missing values with fill_value. The default fill value is NaN.
- Parameters
indexers (dict, optional) – Dictionary with keys given by dimension names and values given by arrays of coordinates tick labels. Any mis-matched coordinate values will be filled in with NaN, and any mis-matched dimension names will simply be ignored. One of indexers or indexers_kwargs must be provided.
copy (bool, optional) – If copy=True, data in the return value is always copied. If copy=False and reindexing is unnecessary, or can be performed with only slice operations, then the output may share memory with the input. In either case, a new xarray object is always returned.
method ({None, 'nearest', 'pad'/'ffill', 'backfill'/'bfill'}, optional) – Method to use for filling index values in indexers not found on this data array:
None (default): don’t fill gaps
pad / ffill: propagate last valid index value forward
backfill / bfill: propagate next valid index value backward
nearest: use nearest valid index value
tolerance (float | Iterable[float] | None, default: None) – Maximum distance between original and new labels for inexact matches. The values of the index at the matching locations must satisfy the equation abs(index[indexer] - target) <= tolerance. Tolerance may be a scalar value, which applies the same tolerance to all values, or list-like, which applies variable tolerance per element. List-like must be the same size as the index and its dtype must exactly match the index’s type.
fill_value (scalar or dict-like, optional) – Value to use for newly missing values. If a dict-like, maps variable names (including coordinates) to fill values. Use this data array’s name to refer to the data array’s values.
**indexers_kwargs ({dim: indexer, ...}, optional) – The keyword arguments form of indexers. One of indexers or indexers_kwargs must be provided.
- Returns
reindexed – Another dataset array, with this array’s data but replaced coordinates.
- Return type
DataArray
Examples
Reverse latitude:
>>> da = xr.DataArray( ... np.arange(4), ... coords=[np.array([90, 89, 88, 87])], ... dims="lat", ... ) >>> da <xarray.DataArray (lat: 4)> array([0, 1, 2, 3]) Coordinates: * lat (lat) int64 90 89 88 87 >>> da.reindex(lat=da.lat[::-1]) <xarray.DataArray (lat: 4)> array([3, 2, 1, 0]) Coordinates: * lat (lat) int64 87 88 89 90
See also
DataArray.reindex_like, align
- reindex_like(other: T_DataArrayOrSet, *, method: ReindexMethodOptions = None, tolerance: int | float | Iterable[int | float] | None = None, copy: bool = True, fill_value=<NA>) Self #
Conform this object onto the indexes of another object, for indexes which the objects share. Missing values are filled with fill_value. The default fill value is NaN.
- Parameters
other (Dataset or DataArray) – Object with an ‘indexes’ attribute giving a mapping from dimension names to pandas.Index objects, which provides coordinates upon which to index the variables in this dataset. The indexes on this other object need not be the same as the indexes on this dataset. Any mis-matched index values will be filled in with NaN, and any mis-matched dimension names will simply be ignored.
method ({None, "nearest", "pad", "ffill", "backfill", "bfill"}, optional) –
Method to use for filling index values from other not found on this data array:
None (default): don’t fill gaps
pad / ffill: propagate last valid index value forward
backfill / bfill: propagate next valid index value backward
nearest: use nearest valid index value
tolerance (optional) – Maximum distance between original and new labels for inexact matches. The values of the index at the matching locations must satisfy the equation abs(index[indexer] - target) <= tolerance. Tolerance may be a scalar value, which applies the same tolerance to all values, or list-like, which applies variable tolerance per element. List-like must be the same size as the index and its dtype must exactly match the index’s type.
copy (bool, default: True) – If copy=True, data in the return value is always copied. If copy=False and reindexing is unnecessary, or can be performed with only slice operations, then the output may share memory with the input. In either case, a new xarray object is always returned.
fill_value (scalar or dict-like, optional) – Value to use for newly missing values. If a dict-like, maps variable names (including coordinates) to fill values. Use this data array’s name to refer to the data array’s values.
- Returns
reindexed – Another dataset array, with this array’s data but coordinates from the other object.
- Return type
DataArray
Examples
>>> data = np.arange(12).reshape(4, 3) >>> da1 = xr.DataArray( ... data=data, ... dims=["x", "y"], ... coords={"x": [10, 20, 30, 40], "y": [70, 80, 90]}, ... ) >>> da1 <xarray.DataArray (x: 4, y: 3)> array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]]) Coordinates: * x (x) int64 10 20 30 40 * y (y) int64 70 80 90 >>> da2 = xr.DataArray( ... data=data, ... dims=["x", "y"], ... coords={"x": [40, 30, 20, 10], "y": [90, 80, 70]}, ... ) >>> da2 <xarray.DataArray (x: 4, y: 3)> array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]]) Coordinates: * x (x) int64 40 30 20 10 * y (y) int64 90 80 70
Reindexing with both DataArrays having the same coordinates set, but in different order:
>>> da1.reindex_like(da2) <xarray.DataArray (x: 4, y: 3)> array([[11, 10, 9], [ 8, 7, 6], [ 5, 4, 3], [ 2, 1, 0]]) Coordinates: * x (x) int64 40 30 20 10 * y (y) int64 90 80 70
Reindexing with the other array having additional coordinates:
>>> da3 = xr.DataArray( ... data=data, ... dims=["x", "y"], ... coords={"x": [20, 10, 29, 39], "y": [70, 80, 90]}, ... ) >>> da1.reindex_like(da3) <xarray.DataArray (x: 4, y: 3)> array([[ 3., 4., 5.], [ 0., 1., 2.], [nan, nan, nan], [nan, nan, nan]]) Coordinates: * x (x) int64 20 10 29 39 * y (y) int64 70 80 90
Filling missing values with the previous valid index with respect to the coordinates’ value:
>>> da1.reindex_like(da3, method="ffill") <xarray.DataArray (x: 4, y: 3)> array([[3, 4, 5], [0, 1, 2], [3, 4, 5], [6, 7, 8]]) Coordinates: * x (x) int64 20 10 29 39 * y (y) int64 70 80 90
Filling missing values while tolerating specified error for inexact matches:
>>> da1.reindex_like(da3, method="ffill", tolerance=5) <xarray.DataArray (x: 4, y: 3)> array([[ 3., 4., 5.], [ 0., 1., 2.], [nan, nan, nan], [nan, nan, nan]]) Coordinates: * x (x) int64 20 10 29 39 * y (y) int64 70 80 90
Filling missing values with manually specified values:
>>> da1.reindex_like(da3, fill_value=19) <xarray.DataArray (x: 4, y: 3)> array([[ 3, 4, 5], [ 0, 1, 2], [19, 19, 19], [19, 19, 19]]) Coordinates: * x (x) int64 20 10 29 39 * y (y) int64 70 80 90
Note that unlike broadcast_like, reindex_like doesn’t create new dimensions:
>>> da1.sel(x=20) <xarray.DataArray (y: 3)> array([3, 4, 5]) Coordinates: x int64 20 * y (y) int64 70 80 90
…so b is not added here:
>>> da1.sel(x=20).reindex_like(da1) <xarray.DataArray (y: 3)> array([3, 4, 5]) Coordinates: x int64 20 * y (y) int64 70 80 90
See also
DataArray.reindex, DataArray.broadcast_like, align
- rename(new_name_or_name_dict: Hashable | Mapping[Any, Hashable] | None = None, **names: Hashable) Self #
Returns a new DataArray with renamed coordinates, dimensions or a new name.
- Parameters
new_name_or_name_dict (str or dict-like, optional) – If the argument is dict-like, it is used as a mapping from old names to new names for coordinates or dimensions. Otherwise, use the argument as the new name for this array.
**names (Hashable, optional) – The keyword arguments form of a mapping from old names to new names for coordinates or dimensions. One of new_name_or_name_dict or names must be provided.
- Returns
renamed – Renamed array or array with renamed coordinates.
- Return type
DataArray
See also
Dataset.rename, DataArray.swap_dims
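Examples
A short sketch (not from the upstream reference) of both calling conventions:
>>> da = xr.DataArray([1, 2, 3], dims="x", name="a") >>> da.rename("b").name 'b' >>> renamed = da.rename({"x": "lon"})  # rename the dimension instead of the array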
- reorder_levels(dim_order: Mapping[Any, Sequence[int | Hashable]] | None = None, **dim_order_kwargs: Sequence[int | Hashable]) Self #
Rearrange index levels using input order.
- Parameters
dim_order (dict-like of Hashable to int or Hashable) – Mapping from names matching dimensions and values given by lists representing new level orders. Every given dimension must have a multi-index.
**dim_order_kwargs (optional) – The keyword arguments form of dim_order. One of dim_order or dim_order_kwargs must be provided.
- Returns
obj – Another dataarray, with this dataarray’s data but replaced coordinates.
- Return type
DataArray
- resample(indexer: Mapping[Any, str] | None = None, skipna: bool | None = None, closed: SideOptions | None = None, label: SideOptions | None = None, base: int | None = None, offset: pd.Timedelta | datetime.timedelta | str | None = None, origin: str | DatetimeLike = 'start_day', loffset: datetime.timedelta | str | None = None, restore_coord_dims: bool | None = None, **indexer_kwargs: str) DataArrayResample #
Returns a Resample object for performing resampling operations.
Handles both downsampling and upsampling. The resampled dimension must be a datetime-like coordinate. If any intervals contain no values from the original object, they will be given the value
NaN
.- Parameters
indexer (Mapping of Hashable to str, optional) – Mapping from the dimension name to resample frequency [1]_. The dimension must be datetime-like.
skipna (bool, optional) – Whether to skip missing values when aggregating in downsampling.
closed ({"left", "right"}, optional) – Side of each interval to treat as closed.
label ({"left", "right"}, optional) – Side of each interval to use for labeling.
base (int, optional) – For frequencies that evenly subdivide 1 day, the “origin” of the aggregated intervals. For example, for “24H” frequency, base could range from 0 through 23.
origin ({'epoch', 'start', 'start_day', 'end', 'end_day'}, pd.Timestamp, datetime.datetime, np.datetime64, or cftime.datetime, default 'start_day') –
The datetime on which to adjust the grouping. The timezone of origin must match the timezone of the index.
If a datetime is not used, these values are also supported: - ‘epoch’: origin is 1970-01-01 - ‘start’: origin is the first value of the timeseries - ‘start_day’: origin is the first day at midnight of the timeseries - ‘end’: origin is the last value of the timeseries - ‘end_day’: origin is the ceiling midnight of the last day
offset (pd.Timedelta, datetime.timedelta, or str, default is None) – An offset timedelta added to the origin.
loffset (timedelta or str, optional) –
Offset used to adjust the resampled time labels. Some pandas date offset strings are supported.
Deprecated since version 2023.03.0: Following pandas, the loffset parameter is deprecated in favor of using time offset arithmetic, and will be removed in a future version of xarray.
restore_coord_dims (bool, optional) – If True, also restore the dimension order of multi-dimensional coordinates.
**indexer_kwargs (str) – The keyword arguments form of indexer. One of indexer or indexer_kwargs must be provided.
- Returns
resampled – This object resampled.
- Return type
core.resample.DataArrayResample
Examples
Downsample monthly time-series data to seasonal data:
>>> da = xr.DataArray( ... np.linspace(0, 11, num=12), ... coords=[ ... pd.date_range( ... "1999-12-15", ... periods=12, ... freq=pd.DateOffset(months=1), ... ) ... ], ... dims="time", ... ) >>> da <xarray.DataArray (time: 12)> array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11.]) Coordinates: * time (time) datetime64[ns] 1999-12-15 2000-01-15 ... 2000-11-15 >>> da.resample(time="QS-DEC").mean() <xarray.DataArray (time: 4)> array([ 1., 4., 7., 10.]) Coordinates: * time (time) datetime64[ns] 1999-12-01 2000-03-01 2000-06-01 2000-09-01
Upsample monthly time-series data to daily data:
>>> da.resample(time="1D").interpolate("linear") # +doctest: ELLIPSIS <xarray.DataArray (time: 337)> array([ 0. , 0.03225806, 0.06451613, 0.09677419, 0.12903226, 0.16129032, 0.19354839, 0.22580645, 0.25806452, 0.29032258, 0.32258065, 0.35483871, 0.38709677, 0.41935484, 0.4516129 , ... 10.80645161, 10.83870968, 10.87096774, 10.90322581, 10.93548387, 10.96774194, 11. ]) Coordinates: * time (time) datetime64[ns] 1999-12-15 1999-12-16 ... 2000-11-15
Limit scope of upsampling method
>>> da.resample(time="1D").nearest(tolerance="1D") <xarray.DataArray (time: 337)> array([ 0., 0., nan, ..., nan, 11., 11.]) Coordinates: * time (time) datetime64[ns] 1999-12-15 1999-12-16 ... 2000-11-15
See also
Dataset.resample, pandas.Series.resample, pandas.DataFrame.resample
References
[1] Pandas frequency (offset) aliases: https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases
- reset_coords(names: Dims = None, *, drop: bool = False) Self | Dataset #
Given names of coordinates, reset them to become variables.
- Parameters
names (str, Iterable of Hashable or None, optional) – Name(s) of non-index coordinates in this dataset to reset into variables. By default, all non-index coordinates are reset.
drop (bool, default: False) – If True, remove coordinates instead of converting them into variables.
- Return type
Dataset, or DataArray if drop == True
Examples
>>> temperature = np.arange(25).reshape(5, 5) >>> pressure = np.arange(50, 75).reshape(5, 5) >>> da = xr.DataArray( ... data=temperature, ... dims=["x", "y"], ... coords=dict( ... lon=("x", np.arange(10, 15)), ... lat=("y", np.arange(20, 25)), ... Pressure=(["x", "y"], pressure), ... ), ... name="Temperature", ... ) >>> da <xarray.DataArray 'Temperature' (x: 5, y: 5)> array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) Coordinates: lon (x) int64 10 11 12 13 14 lat (y) int64 20 21 22 23 24 Pressure (x, y) int64 50 51 52 53 54 55 56 57 ... 67 68 69 70 71 72 73 74 Dimensions without coordinates: x, y
Return Dataset with target coordinate as a data variable rather than a coordinate variable:
>>> da.reset_coords(names="Pressure") <xarray.Dataset> Dimensions: (x: 5, y: 5) Coordinates: lon (x) int64 10 11 12 13 14 lat (y) int64 20 21 22 23 24 Dimensions without coordinates: x, y Data variables: Pressure (x, y) int64 50 51 52 53 54 55 56 57 ... 68 69 70 71 72 73 74 Temperature (x, y) int64 0 1 2 3 4 5 6 7 8 9 ... 16 17 18 19 20 21 22 23 24
Return DataArray without targeted coordinate:
>>> da.reset_coords(names="Pressure", drop=True) <xarray.DataArray 'Temperature' (x: 5, y: 5)> array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19], [20, 21, 22, 23, 24]]) Coordinates: lon (x) int64 10 11 12 13 14 lat (y) int64 20 21 22 23 24 Dimensions without coordinates: x, y
- reset_index(dims_or_levels: Hashable | Sequence[Hashable], drop: bool = False) Self #
Reset the specified index(es) or multi-index level(s).
This legacy method is specific to pandas (multi-)indexes and 1-dimensional “dimension” coordinates. See the more generic drop_indexes() and set_xindex() methods to respectively drop and set pandas or custom indexes for arbitrary coordinates.
- Parameters
dims_or_levels (Hashable or sequence of Hashable) – Name(s) of the dimension(s) and/or multi-index level(s) that will be reset.
drop (bool, default: False) – If True, remove the specified indexes and/or multi-index levels instead of extracting them as new coordinates (default: False).
- Returns
obj – Another dataarray, with this dataarray’s data but replaced coordinates.
- Return type
DataArray
See also
DataArray.set_index, DataArray.set_xindex, DataArray.drop_indexes
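Examples
A minimal sketch (not from the upstream reference), building a multi-index with set_index and then resetting it:
>>> da = xr.DataArray([1, 2], dims="x", coords={"a": ("x", [3, 4]), "b": ("x", [5, 6])}) >>> stacked = da.set_index(x=["a", "b"])  # "a" and "b" become levels of a multi-index on x >>> flat = stacked.reset_index("x")  # the levels are extracted back into plain coordinates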
- roll(shifts: Mapping[Hashable, int] | None = None, roll_coords: bool = False, **shifts_kwargs: int) Self #
Roll this array by an offset along one or more dimensions.
Unlike shift, roll treats the given dimensions as periodic, so will not create any missing values to be filled.
Unlike shift, roll may rotate all variables, including coordinates if specified. The direction of rotation is consistent with numpy.roll().
- Parameters
shifts (mapping of Hashable to int, optional) – Integer offset to rotate each of the given dimensions. Positive offsets roll to the right; negative offsets roll to the left.
roll_coords (bool, default: False) – Indicates whether to roll the coordinates by the offset too.
**shifts_kwargs ({dim: offset, ...}, optional) – The keyword arguments form of shifts. One of shifts or shifts_kwargs must be provided.
- Returns
rolled – DataArray with the same attributes but rolled data and coordinates.
- Return type
DataArray
Examples
>>> arr = xr.DataArray([5, 6, 7], dims="x") >>> arr.roll(x=1) <xarray.DataArray (x: 3)> array([7, 5, 6]) Dimensions without coordinates: x
- rolling(dim: Mapping[Any, int] | None = None, min_periods: int | None = None, center: bool | Mapping[Any, bool] = False, **window_kwargs: int) DataArrayRolling #
Rolling window object for DataArrays.
- Parameters
dim (dict, optional) – Mapping from the dimension name to create the rolling iterator along (e.g. time) to its moving window size.
min_periods (int or None, default: None) – Minimum number of observations in window required to have a value (otherwise result is NA). The default, None, is equivalent to setting min_periods equal to the size of the window.
center (bool or Mapping to int, default: False) – Set the labels at the center of the window.
**window_kwargs (optional) – The keyword arguments form of dim. One of dim or window_kwargs must be provided.
- Return type
core.rolling.DataArrayRolling
Examples
Create rolling seasonal average of monthly data e.g. DJF, JFM, …, SON:
>>> da = xr.DataArray( ... np.linspace(0, 11, num=12), ... coords=[ ... pd.date_range( ... "1999-12-15", ... periods=12, ... freq=pd.DateOffset(months=1), ... ) ... ], ... dims="time", ... ) >>> da <xarray.DataArray (time: 12)> array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11.]) Coordinates: * time (time) datetime64[ns] 1999-12-15 2000-01-15 ... 2000-11-15 >>> da.rolling(time=3, center=True).mean() <xarray.DataArray (time: 12)> array([nan, 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., nan]) Coordinates: * time (time) datetime64[ns] 1999-12-15 2000-01-15 ... 2000-11-15
Remove the NaNs using dropna():
>>> da.rolling(time=3, center=True).mean().dropna("time") <xarray.DataArray (time: 10)> array([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.]) Coordinates: * time (time) datetime64[ns] 2000-01-15 2000-02-15 ... 2000-10-15
See also
core.rolling.DataArrayRolling, Dataset.rolling
- rolling_exp(window: Mapping[Any, int] | None = None, window_type: str = 'span', **window_kwargs) RollingExp[T_DataWithCoords] #
Exponentially-weighted moving window. Similar to EWM in pandas
Requires the optional Numbagg dependency.
- Parameters
window (mapping of hashable to int, optional) – A mapping from the name of the dimension to create the rolling exponential window along (e.g. time) to the size of the moving window.
window_type ({"span", "com", "halflife", "alpha"}, default: "span") – The format of the previously supplied window. Each is a simple numerical transformation of the others. Described in detail: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ewm.html
**window_kwargs (optional) – The keyword arguments form of window. One of window or window_kwargs must be provided.
See also
core.rolling_exp.RollingExp
- round(*args: Any, **kwargs: Any) typing_extensions.Self #
Round an array to the given number of decimals.
around is an alias of numpy.round.
- searchsorted(v, side='left', sorter=None)#
Find indices where elements of v should be inserted in a to maintain order.
For full documentation, see numpy.searchsorted
See also
numpy.searchsorted
equivalent function
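A one-line sketch (not from the upstream reference), assuming the wrapped values are already sorted:
>>> da = xr.DataArray([1, 2, 3, 4, 5], dims="x") >>> idx = da.searchsorted(3)  # insertion index that keeps the values sorted (2 here)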
- sel(indexers: Mapping[Any, Any] | None = None, method: str | None = None, tolerance=None, drop: bool = False, **indexers_kwargs: Any) Self #
Return a new DataArray whose data is given by selecting index labels along the specified dimension(s).
In contrast to DataArray.isel, indexers for this method should use labels instead of integers.
Under the hood, this method is powered by using pandas’s powerful Index objects. This makes label based indexing essentially just as fast as using integer indexing.
It also means this method uses pandas’s (well documented) logic for indexing. This means you can use string shortcuts for datetime indexes (e.g., ‘2000-01’ to select all values in January 2000). It also means that slices are treated as inclusive of both the start and stop values, unlike normal Python indexing.
Warning
Do not try to assign values when using any of the indexing methods isel or sel:
da = xr.DataArray([0, 1, 2, 3], dims=["x"])
# DO NOT do this
da.isel(x=[0, 1, 2])[1] = -1
Assigning values with chained indexing using .sel or .isel fails silently.
- Parameters
indexers (dict, optional) – A dict with keys matching dimensions and values given by scalars, slices or arrays of tick labels. For dimensions with multi-index, the indexer may also be a dict-like object with keys matching index level names. If DataArrays are passed as indexers, xarray-style indexing will be carried out. See indexing for the details. One of indexers or indexers_kwargs must be provided.
method ({None, "nearest", "pad", "ffill", "backfill", "bfill"}, optional) –
Method to use for inexact matches:
None (default): only exact matches
pad / ffill: propagate last valid index value forward
backfill / bfill: propagate next valid index value backward
nearest: use nearest valid index value
tolerance (optional) – Maximum distance between original and new labels for inexact matches. The values of the index at the matching locations must satisfy the equation abs(index[indexer] - target) <= tolerance.
drop (bool, optional) – If drop=True, drop coordinates variables in indexers instead of making them scalar.
**indexers_kwargs ({dim: indexer, ...}, optional) – The keyword arguments form of indexers. One of indexers or indexers_kwargs must be provided.
- Returns
obj – A new DataArray with the same contents as this DataArray, except the data and each dimension is indexed by the appropriate indexers. If indexer DataArrays have coordinates that do not conflict with this object, then these coordinates will be attached. In general, each array’s data will be a view of the array’s data in this DataArray, unless vectorized indexing was triggered by using an array indexer, in which case the data will be a copy.
- Return type
DataArray
See also
Dataset.sel, DataArray.isel
- xarray-tutorial:intermediate/indexing/indexing
Tutorial material on indexing with Xarray objects
- xarray-tutorial:fundamentals/02.1_indexing_Basic
Tutorial material on basics of indexing
Examples
>>> da = xr.DataArray(
...     np.arange(25).reshape(5, 5),
...     coords={"x": np.arange(5), "y": np.arange(5)},
...     dims=("x", "y"),
... )
>>> da
<xarray.DataArray (x: 5, y: 5)>
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19],
       [20, 21, 22, 23, 24]])
Coordinates:
  * x        (x) int64 0 1 2 3 4
  * y        (y) int64 0 1 2 3 4
>>> tgt_x = xr.DataArray(np.linspace(0, 4, num=5), dims="points")
>>> tgt_y = xr.DataArray(np.linspace(0, 4, num=5), dims="points")
>>> da = da.sel(x=tgt_x, y=tgt_y, method="nearest")
>>> da
<xarray.DataArray (points: 5)>
array([ 0,  6, 12, 18, 24])
Coordinates:
    x        (points) int64 0 1 2 3 4
    y        (points) int64 0 1 2 3 4
Dimensions without coordinates: points
- sel_inside(bounds: Tuple[Tuple[float, float, float], Tuple[float, float, float]]) tidy3d.components.data.data_array.SpatialDataArray #
Return a new SpatialDataArray that contains the minimal amount of data necessary to cover a spatial region defined by bounds.
- Parameters
bounds (Tuple[Tuple[float, float, float], Tuple[float, float, float]]) – Min and max bounds packaged as ((minx, miny, minz), (maxx, maxy, maxz)).
- Returns
Extracted spatial data array.
- Return type
SpatialDataArray
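A minimal sketch of usage (coordinate values and bounds are hypothetical):
>>> coords = dict(x=[0.0, 0.5, 1.0, 1.5], y=[0.0, 1.0, 2.0], z=[0.0, 1.0])
>>> arr = SpatialDataArray(np.random.random((4, 3, 2)), coords=coords)
>>> sub = arr.sel_inside(bounds=((0.2, 0.0, 0.0), (1.2, 1.5, 1.0)))  # minimal data covering the region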
- set_close(close: Optional[Callable[[], None]]) None #
Register the function that releases any resources linked to this object.
This method controls how xarray cleans up resources associated with this object when the .close() method is called. It is mostly intended for backend developers and is rarely needed by regular end-users.
- Parameters
close (callable) – The function that, when called like close(), releases any resources linked to this object.
- set_index(indexes: Mapping[Any, Hashable | Sequence[Hashable]] | None = None, append: bool = False, **indexes_kwargs: Hashable | Sequence[Hashable]) Self #
Set DataArray (multi-)indexes using one or more existing coordinates.
This legacy method is limited to pandas (multi-)indexes and 1-dimensional "dimension" coordinates. See set_xindex() for setting a pandas or a custom Xarray-compatible index from one or more arbitrary coordinates.
- Parameters
indexes ({dim: index, ...}) – Mapping from names matching dimensions and values given by (lists of) the names of existing coordinates or variables to set as new (multi-)index.
append (bool, default: False) – If True, append the supplied index(es) to the existing index(es). Otherwise replace the existing index(es).
**indexes_kwargs (optional) – The keyword arguments form of indexes. One of indexes or indexes_kwargs must be provided.
- Returns
obj – Another DataArray, with this data but replaced coordinates.
- Return type
DataArray
Examples
>>> arr = xr.DataArray(
...     data=np.ones((2, 3)),
...     dims=["x", "y"],
...     coords={"x": range(2), "y": range(3), "a": ("x", [3, 4])},
... )
>>> arr
<xarray.DataArray (x: 2, y: 3)>
array([[1., 1., 1.],
       [1., 1., 1.]])
Coordinates:
  * x        (x) int64 0 1
  * y        (y) int64 0 1 2
    a        (x) int64 3 4
>>> arr.set_index(x="a")
<xarray.DataArray (x: 2, y: 3)>
array([[1., 1., 1.],
       [1., 1., 1.]])
Coordinates:
  * x        (x) int64 3 4
  * y        (y) int64 0 1 2
See also
DataArray.reset_index, DataArray.set_xindex
- set_xindex(coord_names: str | Sequence[Hashable], index_cls: type[Index] | None = None, **options) Self #
Set a new, Xarray-compatible index from one or more existing coordinate(s).
- Parameters
coord_names (str or list) – Name(s) of the coordinate(s) used to build the index. If several names are given, their order matters.
index_cls (subclass of Index) – The type of index to create. By default, try setting a pandas (multi-)index from the supplied coordinates.
**options – Options passed to the index constructor.
- Returns
obj – Another dataarray, with this dataarray’s data and with a new index.
- Return type
DataArray
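A short sketch, building the default pandas index over a non-dimension coordinate (the coordinate name "a" is illustrative):
>>> arr = xr.DataArray([1.0, 2.0], dims="x", coords={"a": ("x", [10, 20])})
>>> indexed = arr.set_xindex("a")  # "a" becomes usable for label-based selection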
- property shape: tuple[int, ...]#
Tuple of array dimensions.
See also
numpy.ndarray.shape
- shift(shifts: Mapping[Any, int] | None = None, fill_value: Any = <NA>, **shifts_kwargs: int) Self #
Shift this DataArray by an offset along one or more dimensions.
Only the data is moved; coordinates stay in place. This is consistent with the behavior of shift in pandas.
Values shifted from beyond array bounds will appear at one end of each dimension, which are filled according to fill_value. For periodic offsets instead see roll.
- Parameters
shifts (mapping of Hashable to int or None, optional) – Integer offset to shift along each of the given dimensions. Positive offsets shift to the right; negative offsets shift to the left.
fill_value (scalar, optional) – Value to use for newly missing values.
**shifts_kwargs – The keyword arguments form of shifts. One of shifts or shifts_kwargs must be provided.
- Returns
shifted – DataArray with the same coordinates and attributes but shifted data.
- Return type
DataArray
See also
DataArray.roll
Examples
>>> arr = xr.DataArray([5, 6, 7], dims="x")
>>> arr.shift(x=1)
<xarray.DataArray (x: 3)>
array([nan,  5.,  6.])
Dimensions without coordinates: x
- property size: int#
Number of elements in the array.
Equal to np.prod(a.shape), i.e., the product of the array's dimensions.
See also
numpy.ndarray.size
- property sizes: collections.abc.Mapping[collections.abc.Hashable, int]#
Ordered mapping from dimension names to lengths.
Immutable.
See also
Dataset.sizes
- sortby(variables: Hashable | DataArray | Sequence[Hashable | DataArray] | Callable[[Self], Hashable | DataArray | Sequence[Hashable | DataArray]], ascending: bool = True) Self #
Sort object by labels or values (along an axis).
Sorts the dataarray, either along specified dimensions, or according to values of 1-D dataarrays that share dimension with calling object.
If the input variables are dataarrays, then the dataarrays are aligned (via left-join) to the calling object prior to sorting by cell values. NaNs are sorted to the end, following Numpy convention.
If multiple sorts along the same dimension are given, numpy's lexsort is performed along that dimension (https://numpy.org/doc/stable/reference/generated/numpy.lexsort.html), and the FIRST key in the sequence is used as the primary sort key, followed by the 2nd key, etc.
- Parameters
variables (Hashable, DataArray, sequence of Hashable or DataArray, or Callable) – 1D DataArray objects or name(s) of 1D variable(s) in coords whose values are used to sort this array. If a callable, the callable is passed this object, and the result is used as the value for variables.
ascending (bool, default: True) – Whether to sort by ascending or descending order.
- Returns
sorted – A new dataarray where all the specified dims are sorted by dim labels.
- Return type
DataArray
See also
Dataset.sortby, numpy.sort, pandas.sort_values, pandas.sort_index
Examples
>>> da = xr.DataArray(
...     np.arange(5, 0, -1),
...     coords=[pd.date_range("1/1/2000", periods=5)],
...     dims="time",
... )
>>> da
<xarray.DataArray (time: 5)>
array([5, 4, 3, 2, 1])
Coordinates:
  * time     (time) datetime64[ns] 2000-01-01 2000-01-02 ... 2000-01-05
>>> da.sortby(da)
<xarray.DataArray (time: 5)>
array([1, 2, 3, 4, 5])
Coordinates:
  * time     (time) datetime64[ns] 2000-01-05 2000-01-04 ... 2000-01-01
>>> da.sortby(lambda x: x)
<xarray.DataArray (time: 5)>
array([1, 2, 3, 4, 5])
Coordinates:
  * time     (time) datetime64[ns] 2000-01-05 2000-01-04 ... 2000-01-01
- squeeze(dim: Hashable | Iterable[Hashable] | None = None, drop: bool = False, axis: int | Iterable[int] | None = None) Self #
Return a new object with squeezed data.
- Parameters
dim (None or Hashable or iterable of Hashable, optional) – Selects a subset of the length one dimensions. If a dimension is selected with length greater than one, an error is raised. If None, all length one dimensions are squeezed.
drop (bool, default: False) – If drop=True, drop squeezed coordinates instead of making them scalar.
axis (None or int or iterable of int, optional) – Like dim, but positional.
- Returns
squeezed – This object, but with all or a subset of the dimensions of length 1 removed.
- Return type
same type as caller
See also
numpy.squeeze
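A short sketch for illustration:
>>> da = xr.DataArray([[1, 2, 3]], dims=("x", "y"))  # "x" has length 1
>>> da.squeeze("x")
<xarray.DataArray (y: 3)>
array([1, 2, 3])
Dimensions without coordinates: y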
- stack(dimensions: Mapping[Any, Sequence[Hashable]] | None = None, create_index: bool | None = True, index_cls: type[Index] = <class 'xarray.core.indexes.PandasMultiIndex'>, **dimensions_kwargs: Sequence[Hashable]) Self #
Stack any number of existing dimensions into a single new dimension.
New dimensions will be added at the end, and the corresponding coordinate variables will be combined into a MultiIndex.
- Parameters
dimensions (mapping of Hashable to sequence of Hashable) – Mapping of the form new_name=(dim1, dim2, …). Names of new dimensions, and the existing dimensions that they replace. An ellipsis (…) will be replaced by all unlisted dimensions. Passing a list containing an ellipsis (stacked_dim=[…]) will stack over all dimensions.
create_index (bool or None, default: True) – If True, create a multi-index for each of the stacked dimensions. If False, don’t create any index. If None, create a multi-index only if exactly one single (1-d) coordinate index is found for every dimension to stack.
index_cls (class, optional) – Can be used to pass a custom multi-index type. Must be an Xarray index that implements .stack(). By default, a pandas multi-index wrapper is used.
**dimensions_kwargs – The keyword arguments form of dimensions. One of dimensions or dimensions_kwargs must be provided.
- Returns
stacked – DataArray with stacked data.
- Return type
DataArray
Examples
>>> arr = xr.DataArray(
...     np.arange(6).reshape(2, 3),
...     coords=[("x", ["a", "b"]), ("y", [0, 1, 2])],
... )
>>> arr
<xarray.DataArray (x: 2, y: 3)>
array([[0, 1, 2],
       [3, 4, 5]])
Coordinates:
  * x        (x) <U1 'a' 'b'
  * y        (y) int64 0 1 2
>>> stacked = arr.stack(z=("x", "y"))
>>> stacked.indexes["z"]
MultiIndex([('a', 0),
            ('a', 1),
            ('a', 2),
            ('b', 0),
            ('b', 1),
            ('b', 2)],
           name='z')
See also
DataArray.unstack
- std(dim: Dims = None, *, skipna: bool | None = None, ddof: int = 0, keep_attrs: bool | None = None, **kwargs: Any) Self #
Reduce this DataArray's data by applying std along some dimension(s).
- Parameters
dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply std. For e.g. dim="x" or dim=["x", "y"]. If "..." or None, will reduce over all dimensions.
skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).
ddof (int, default: 0) – "Delta Degrees of Freedom": the divisor used in the calculation is N - ddof, where N represents the number of elements.
keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating std on this object's data. These could include dask-specific kwargs like split_every.
- Returns
reduced – New DataArray with std applied to its data and the indicated dimension(s) removed.
- Return type
DataArray
See also
numpy.std, dask.array.std, Dataset.std
- agg
User guide on reduction or aggregation operations.
Notes
Non-numeric variables will be removed prior to reducing.
Examples
>>> da = xr.DataArray(
...     np.array([1, 2, 3, 0, 2, np.nan]),
...     dims="time",
...     coords=dict(
...         time=("time", pd.date_range("2001-01-01", freq="M", periods=6)),
...         labels=("time", np.array(["a", "b", "c", "c", "b", "a"])),
...     ),
... )
>>> da
<xarray.DataArray (time: 6)>
array([ 1.,  2.,  3.,  0.,  2., nan])
Coordinates:
  * time     (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.std()
<xarray.DataArray ()>
array(1.0198039)
Use skipna to control whether NaNs are ignored.
>>> da.std(skipna=False)
<xarray.DataArray ()>
array(nan)
Specify ddof=1 for an unbiased estimate.
>>> da.std(skipna=True, ddof=1)
<xarray.DataArray ()>
array(1.14017543)
- sum(dim: Dims = None, *, skipna: bool | None = None, min_count: int | None = None, keep_attrs: bool | None = None, **kwargs: Any) Self #
Reduce this DataArray's data by applying sum along some dimension(s).
- Parameters
dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply sum. For e.g. dim="x" or dim=["x", "y"]. If "..." or None, will reduce over all dimensions.
skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).
min_count (int or None, optional) – The required number of valid values to perform the operation. If fewer than min_count non-NA values are present the result will be NA. Only used if skipna is set to True or defaults to True for the array's dtype. Changed in version 0.17.0: if specified on an integer array and skipna=True, the result will be a float array.
keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating sum on this object's data. These could include dask-specific kwargs like split_every.
- Returns
reduced – New DataArray with sum applied to its data and the indicated dimension(s) removed.
- Return type
DataArray
See also
numpy.sum, dask.array.sum, Dataset.sum
- agg
User guide on reduction or aggregation operations.
Notes
Non-numeric variables will be removed prior to reducing.
Examples
>>> da = xr.DataArray(
...     np.array([1, 2, 3, 0, 2, np.nan]),
...     dims="time",
...     coords=dict(
...         time=("time", pd.date_range("2001-01-01", freq="M", periods=6)),
...         labels=("time", np.array(["a", "b", "c", "c", "b", "a"])),
...     ),
... )
>>> da
<xarray.DataArray (time: 6)>
array([ 1.,  2.,  3.,  0.,  2., nan])
Coordinates:
  * time     (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.sum()
<xarray.DataArray ()>
array(8.)
Use skipna to control whether NaNs are ignored.
>>> da.sum(skipna=False)
<xarray.DataArray ()>
array(nan)
Specify min_count for finer control over when NaNs are ignored.
>>> da.sum(skipna=True, min_count=2)
<xarray.DataArray ()>
array(8.)
- swap_dims(dims_dict: Mapping[Any, Hashable] | None = None, **dims_kwargs) Self #
Returns a new DataArray with swapped dimensions.
- Parameters
dims_dict (dict-like) – Dictionary whose keys are current dimension names and whose values are new names.
**dims_kwargs ({existing_dim: new_dim, ...}, optional) – The keyword arguments form of dims_dict. One of dims_dict or dims_kwargs must be provided.
- Returns
swapped – DataArray with swapped dimensions.
- Return type
DataArray
Examples
>>> arr = xr.DataArray(
...     data=[0, 1],
...     dims="x",
...     coords={"x": ["a", "b"], "y": ("x", [0, 1])},
... )
>>> arr
<xarray.DataArray (x: 2)>
array([0, 1])
Coordinates:
  * x        (x) <U1 'a' 'b'
    y        (x) int64 0 1
>>> arr.swap_dims({"x": "y"})
<xarray.DataArray (y: 2)>
array([0, 1])
Coordinates:
    x        (y) <U1 'a' 'b'
  * y        (y) int64 0 1
>>> arr.swap_dims({"x": "z"})
<xarray.DataArray (z: 2)>
array([0, 1])
Coordinates:
    x        (z) <U1 'a' 'b'
    y        (z) int64 0 1
Dimensions without coordinates: z
See also
DataArray.rename, Dataset.swap_dims
- tail(indexers: Mapping[Any, int] | int | None = None, **indexers_kwargs: Any) Self #
Return a new DataArray whose data is given by the last n values along the specified dimension(s). The default is n = 5.
See also
Dataset.tail, DataArray.head, DataArray.thin
Examples
>>> da = xr.DataArray(
...     np.arange(25).reshape(5, 5),
...     dims=("x", "y"),
... )
>>> da
<xarray.DataArray (x: 5, y: 5)>
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19],
       [20, 21, 22, 23, 24]])
Dimensions without coordinates: x, y
>>> da.tail(y=1)
<xarray.DataArray (x: 5, y: 1)>
array([[ 4],
       [ 9],
       [14],
       [19],
       [24]])
Dimensions without coordinates: x, y
>>> da.tail({"x": 2, "y": 2})
<xarray.DataArray (x: 2, y: 2)>
array([[18, 19],
       [23, 24]])
Dimensions without coordinates: x, y
- thin(indexers: Mapping[Any, int] | int | None = None, **indexers_kwargs: Any) Self #
Return a new DataArray whose data is given by each nth value along the specified dimension(s).
Examples
>>> x_arr = np.arange(0, 26)
>>> x_arr
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
       17, 18, 19, 20, 21, 22, 23, 24, 25])
>>> x = xr.DataArray(
...     np.reshape(x_arr, (2, 13)),
...     dims=("x", "y"),
...     coords={"x": [0, 1], "y": np.arange(0, 13)},
... )
>>> x
<xarray.DataArray (x: 2, y: 13)>
array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12],
       [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]])
Coordinates:
  * x        (x) int64 0 1
  * y        (y) int64 0 1 2 3 4 5 6 7 8 9 10 11 12
>>> x.thin(3)
<xarray.DataArray (x: 1, y: 5)>
array([[ 0,  3,  6,  9, 12]])
Coordinates:
  * x        (x) int64 0
  * y        (y) int64 0 3 6 9 12
>>> x.thin({"x": 2, "y": 5})
<xarray.DataArray (x: 1, y: 3)>
array([[ 0,  5, 10]])
Coordinates:
  * x        (x) int64 0
  * y        (y) int64 0 5 10
See also
Dataset.thin, DataArray.head, DataArray.tail
- to_dask_dataframe(dim_order: Sequence[Hashable] | None = None, set_index: bool = False) DaskDataFrame #
Convert this array into a dask.dataframe.DataFrame.
- Parameters
dim_order (Sequence of Hashable or None , optional) – Hierarchical dimension order for the resulting dataframe. Array content is transposed to this order and then written out as flat vectors in contiguous order, so the last dimension in this list will be contiguous in the resulting DataFrame. This has a major influence on which operations are efficient on the resulting dask dataframe.
set_index (bool, default: False) – If set_index=True, the dask DataFrame is indexed by this dataset’s coordinate. Since dask DataFrames do not support multi-indexes, set_index only works if the dataset only contains one dimension.
- Return type
dask.dataframe.DataFrame
Examples
>>> da = xr.DataArray(
...     np.arange(4 * 2 * 2).reshape(4, 2, 2),
...     dims=("time", "lat", "lon"),
...     coords={
...         "time": np.arange(4),
...         "lat": [-30, -20],
...         "lon": [120, 130],
...     },
...     name="eg_dataarray",
...     attrs={"units": "Celsius", "description": "Random temperature data"},
... )
>>> da.to_dask_dataframe(["lat", "lon", "time"]).compute()
    lat  lon  time  eg_dataarray
0   -30  120     0             0
1   -30  120     1             4
2   -30  120     2             8
3   -30  120     3            12
4   -30  130     0             1
5   -30  130     1             5
6   -30  130     2             9
7   -30  130     3            13
8   -20  120     0             2
9   -20  120     1             6
10  -20  120     2            10
11  -20  120     3            14
12  -20  130     0             3
13  -20  130     1             7
14  -20  130     2            11
15  -20  130     3            15
- to_dataframe(name: Optional[collections.abc.Hashable] = None, dim_order: Optional[collections.abc.Sequence[collections.abc.Hashable]] = None) pandas.core.frame.DataFrame #
Convert this array and its coordinates into a tidy pandas.DataFrame.
The DataFrame is indexed by the Cartesian product of index coordinates (in the form of a pandas.MultiIndex). Other coordinates are included as columns in the DataFrame.
For 1D and 2D DataArrays, see also DataArray.to_pandas() which doesn't rely on a MultiIndex to build the DataFrame.
- Parameters
name (Hashable or None, optional) – Name to give to this array (required if unnamed).
dim_order (Sequence of Hashable or None, optional) –
Hierarchical dimension order for the resulting dataframe. Array content is transposed to this order and then written out as flat vectors in contiguous order, so the last dimension in this list will be contiguous in the resulting DataFrame. This has a major influence on which operations are efficient on the resulting dataframe.
If provided, must include all dimensions of this DataArray. By default, dimensions are sorted according to the DataArray dimensions order.
- Returns
result – DataArray as a pandas DataFrame.
- Return type
DataFrame
See also
DataArray.to_pandas, DataArray.to_series
- to_dataset(dim: Optional[collections.abc.Hashable] = None, *, name: Optional[collections.abc.Hashable] = None, promote_attrs: bool = False) xarray.core.dataset.Dataset #
Convert a DataArray to a Dataset.
- Parameters
dim (Hashable, optional) – Name of the dimension on this array along which to split this array into separate variables. If not provided, this array is converted into a Dataset of one variable.
name (Hashable, optional) – Name to substitute for this array's name. Only valid if dim is not provided.
promote_attrs (bool, default: False) – Set to True to shallow copy attrs of DataArray to returned Dataset.
- Returns
dataset
- Return type
Dataset
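A small sketch (the variable name "foo" is illustrative):
>>> da = xr.DataArray([1, 2, 3], dims="x", name="foo")
>>> ds = da.to_dataset()  # Dataset with the single data variable "foo"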
- to_dict(data: Union[bool, Literal['list', 'array']] = 'list', encoding: bool = False) dict[str, Any] #
Convert this xarray.DataArray into a dictionary following xarray naming conventions.
Converts all variables and attributes to native Python objects. Useful for converting to JSON. To avoid datetime incompatibility, use the decode_times=False kwarg in xarray.open_dataset.
- Parameters
data (bool or {"list", "array"}, default: "list") – Whether to include the actual data in the dictionary. When set to False, returns just the schema. If set to “array”, returns data as underlying array type. If set to “list” (or True for backwards compatibility), returns data in lists of Python data types. Note that for obtaining the “list” output efficiently, use da.compute().to_dict(data=”list”).
encoding (bool, default: False) – Whether to include the Dataset’s encoding in the dictionary.
- Returns
dict
- Return type
dict
See also
DataArray.from_dict, Dataset.to_dict
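For instance, a minimal sketch (output details may vary by xarray version):
>>> da = xr.DataArray([1, 2], dims="x", name="a")
>>> d = da.to_dict()
>>> d["dims"], d["data"]
(('x',), [1, 2])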
- to_hdf5(fname: str, group_path: str) None #
Save an xr.DataArray to an hdf5 file under a given group path.
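A minimal sketch of usage (the file name and group path are hypothetical):
>>> coords = dict(x=[1, 2], y=[2, 3, 4], z=[3, 4, 5, 6])
>>> arr = SpatialDataArray(np.random.random((2, 3, 4)), coords=coords)
>>> arr.to_hdf5(fname="data.hdf5", group_path="spatial_data")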
- to_index() pandas.core.indexes.base.Index #
Convert this variable to a pandas.Index. Only possible for 1D arrays.
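A quick sketch (the exact Index repr may vary by pandas version):
>>> da = xr.DataArray([10, 20, 30], dims="x", name="x")
>>> da.to_index()
Index([10, 20, 30], dtype='int64', name='x')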
- to_iris() iris_Cube #
Convert this array into an iris.cube.Cube
- to_masked_array(copy: bool = True) numpy.ma.core.MaskedArray #
Convert this array into a numpy.ma.MaskedArray
- Parameters
copy (bool, default: True) – If True make a copy of the array in the result. If False, a MaskedArray view of DataArray.values is returned.
- Returns
result – Masked where invalid values (nan or inf) occur.
- Return type
MaskedArray
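A short sketch; invalid values become masked entries:
>>> da = xr.DataArray([1.0, np.nan, 3.0], dims="x")
>>> da.to_masked_array().mask
array([False,  True, False])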
- to_netcdf(path: str | PathLike | None = None, mode: Literal['w', 'a'] = 'w', format: T_NetcdfTypes | None = None, group: str | None = None, engine: T_NetcdfEngine | None = None, encoding: Mapping[Hashable, Mapping[str, Any]] | None = None, unlimited_dims: Iterable[Hashable] | None = None, compute: bool = True, invalid_netcdf: bool = False) bytes | Delayed | None #
Write DataArray contents to a netCDF file.
- Parameters
path (str, path-like or None, optional) – Path to which to save this dataset. File-like objects are only supported by the scipy engine. If no path is provided, this function returns the resulting netCDF file as bytes; in this case, we need to use scipy, which does not support netCDF version 4 (the default format becomes NETCDF3_64BIT).
mode ({"w", "a"}, default: "w") – Write (‘w’) or append (‘a’) mode. If mode=’w’, any existing file at this location will be overwritten. If mode=’a’, existing variables will be overwritten.
format ({"NETCDF4", "NETCDF4_CLASSIC", "NETCDF3_64BIT", "NETCDF3_CLASSIC"}, optional) –
File format for the resulting netCDF file:
NETCDF4: Data is stored in an HDF5 file, using netCDF4 API features.
NETCDF4_CLASSIC: Data is stored in an HDF5 file, using only netCDF 3 compatible API features.
NETCDF3_64BIT: 64-bit offset version of the netCDF 3 file format, which fully supports 2+ GB files, but is only compatible with clients linked against netCDF version 3.6.0 or later.
NETCDF3_CLASSIC: The classic netCDF 3 file format. It does not handle 2+ GB files very well.
All formats are supported by the netCDF4-python library. scipy.io.netcdf only supports the last two formats.
The default format is NETCDF4 if you are saving a file to disk and have the netCDF4-python library available. Otherwise, xarray falls back to using scipy to write netCDF files and defaults to the NETCDF3_64BIT format (scipy does not support netCDF4).
group (str, optional) – Path to the netCDF4 group in the given file to open (only works for format=’NETCDF4’). The group(s) will be created if necessary.
engine ({"netcdf4", "scipy", "h5netcdf"}, optional) – Engine to use when writing netCDF files. If not provided, the default engine is chosen based on available dependencies, with a preference for ‘netcdf4’ if writing to a file on disk.
encoding (dict, optional) –
Nested dictionary with variable names as keys and dictionaries of variable specific encodings as values, e.g.,
{"my_variable": {"dtype": "int16", "scale_factor": 0.1, "zlib": True}, ...}
The h5netcdf engine supports both the NetCDF4-style compression encoding parameters {"zlib": True, "complevel": 9} and the h5py ones {"compression": "gzip", "compression_opts": 9}. This allows using any compression plugin installed in the HDF5 library, e.g. LZF.
unlimited_dims (iterable of Hashable, optional) – Dimension(s) that should be serialized as unlimited dimensions. By default, no dimensions are treated as unlimited dimensions. Note that unlimited_dims may also be set via dataset.encoding["unlimited_dims"].
compute (bool, default: True) – If true compute immediately, otherwise return a dask.delayed.Delayed object that can be computed later.
invalid_netcdf (bool, default: False) – Only valid along with engine="h5netcdf". If True, allow writing hdf5 files which are invalid netcdf as described in https://github.com/h5netcdf/h5netcdf.
- Returns
store –
bytes if path is None
dask.delayed.Delayed if compute is False
None otherwise
- Return type
bytes or Delayed or None
Notes
Only xarray.Dataset objects can be written to netCDF files, so the xarray.DataArray is converted to a xarray.Dataset object containing a single variable. If the DataArray has no name, or if the name is the same as a coordinate name, then it is given the name "__xarray_dataarray_variable__".
See also
Dataset.to_netcdf
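A minimal sketch (the file name is hypothetical; requires a netCDF backend such as netcdf4 or scipy):
>>> da = xr.DataArray([1.0, 2.0], dims="x", name="data")
>>> da.to_netcdf("saved_on_disk.nc")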
- to_numpy() numpy.ndarray #
Coerces wrapped data to numpy and returns a numpy.ndarray.
See also
DataArray.as_numpy
Same but returns the surrounding DataArray instead.
Dataset.as_numpy, DataArray.values, DataArray.data
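For example:
>>> xr.DataArray([1, 2, 3], dims="x").to_numpy()
array([1, 2, 3])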
- to_pandas() Self | pd.Series | pd.DataFrame #
Convert this array into a pandas object with the same shape.
The type of the returned object depends on the number of DataArray dimensions:
0D -> xarray.DataArray
1D -> pandas.Series
2D -> pandas.DataFrame
Only works for arrays with 2 or fewer dimensions.
The DataArray constructor performs the inverse transformation.
- Returns
result – DataArray, pandas Series or pandas DataFrame.
- Return type
DataArray | Series | DataFrame
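A quick sketch; a 1D array converts to a pandas Series:
>>> s = xr.DataArray([1, 2, 3], dims="x").to_pandas()
>>> type(s).__name__
'Series'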
- to_series() pandas.core.series.Series #
Convert this array into a pandas.Series.
The Series is indexed by the Cartesian product of index coordinates (in the form of a pandas.MultiIndex).
- Returns
result – DataArray as a pandas Series.
- Return type
Series
See also
DataArray.to_pandas, DataArray.to_dataframe
- to_unstacked_dataset(dim: collections.abc.Hashable, level: int | collections.abc.Hashable = 0) xarray.core.dataset.Dataset #
Unstack DataArray expanding to Dataset along a given level of a stacked coordinate.
This is the inverse operation of Dataset.to_stacked_array.
- Parameters
dim (Hashable) – Name of existing dimension to unstack
level (int or Hashable, default: 0) – The MultiIndex level to expand to a dataset along. Can either be the integer index of the level or its name.
- Returns
unstacked
- Return type
Dataset
Examples
>>> arr = xr.DataArray(
...     np.arange(6).reshape(2, 3),
...     coords=[("x", ["a", "b"]), ("y", [0, 1, 2])],
... )
>>> data = xr.Dataset({"a": arr, "b": arr.isel(y=0)})
>>> data
<xarray.Dataset>
Dimensions:  (x: 2, y: 3)
Coordinates:
  * x        (x) <U1 'a' 'b'
  * y        (y) int64 0 1 2
Data variables:
    a        (x, y) int64 0 1 2 3 4 5
    b        (x) int64 0 3
>>> stacked = data.to_stacked_array("z", ["x"])
>>> stacked.indexes["z"]
MultiIndex([('a',   0),
            ('a',   1),
            ('a',   2),
            ('b', nan)],
           name='z')
>>> roundtripped = stacked.to_unstacked_dataset(dim="z")
>>> data.identical(roundtripped)
True
See also
Dataset.to_stacked_array
- to_zarr(store: MutableMapping | str | PathLike[str] | None = None, chunk_store: MutableMapping | str | PathLike | None = None, mode: ZarrWriteModes | None = None, synchronizer=None, group: str | None = None, encoding: Mapping | None = None, *, compute: bool = True, consolidated: bool | None = None, append_dim: Hashable | None = None, region: Mapping[str, slice] | None = None, safe_chunks: bool = True, storage_options: dict[str, str] | None = None, zarr_version: int | None = None) ZarrStore | Delayed #
Write DataArray contents to a Zarr store.
Zarr chunks are determined in the following way:
From the chunks attribute in each variable's encoding (can be set via DataArray.chunk).
If the variable is a Dask array, from the dask chunks.
If neither Dask chunks nor encoding chunks are present, chunks will be determined automatically by Zarr.
If both Dask chunks and encoding chunks are present, encoding chunks will be used, provided that there is a many-to-one relationship between encoding chunks and dask chunks (i.e. Dask chunks are bigger than and evenly divide encoding chunks); otherwise raise a ValueError. This restriction ensures that no synchronization / locks are required when writing. To disable this restriction, use safe_chunks=False.
- Parameters
store (MutableMapping, str or path-like, optional) – Store or path to directory in local or remote file system.
chunk_store (MutableMapping, str or path-like, optional) – Store or path to directory in local or remote file system only for Zarr array chunks. Requires zarr-python v2.4.0 or later.
mode ({"w", "w-", "a", "a-", "r+", None}, optional) – Persistence mode: "w" means create (overwrite if exists); "w-" means create (fail if exists); "a" means override all existing variables including dimension coordinates (create if does not exist); "a-" means only append those variables that have append_dim. "r+" means modify existing array values only (raise an error if any metadata or shapes would change). The default mode is "a" if append_dim is set. Otherwise, it is "r+" if region is set and "w-" otherwise.
synchronizer (object, optional) – Zarr array synchronizer.
group (str, optional) – Group path. (a.k.a. path in zarr terminology.)
encoding (dict, optional) – Nested dictionary with variable names as keys and dictionaries of variable specific encodings as values, e.g., {"my_variable": {"dtype": "int16", "scale_factor": 0.1,}, ...}
compute (bool, default: True) – If True write array data immediately, otherwise return a dask.delayed.Delayed object that can be computed to write array data later. Metadata is always updated eagerly.
If True, apply zarr’s consolidate_metadata function to the store after writing metadata and read existing stores with consolidated metadata; if False, do not. The default (consolidated=None) means write consolidated metadata and attempt to read consolidated metadata for existing stores (falling back to non-consolidated).
When the experimental
zarr_version=3
,consolidated
must be either beNone
orFalse
.append_dim (hashable, optional) – If set, the dimension along which the data will be appended. All other dimensions on overridden variables must remain the same size.
region (dict, optional) –
Optional mapping from dimension names to integer slices along dataarray dimensions to indicate the region of existing zarr array(s) in which to write this dataarray's data. For example, {'x': slice(0, 1000), 'y': slice(10000, 11000)} would indicate that values should be written to the region 0:1000 along x and 10000:11000 along y.
Two restrictions apply to the use of region:
If region is set, _all_ variables in a dataarray must have at least one dimension in common with the region. Other variables should be written in a separate call to to_zarr().
Dimensions cannot be included in both region and append_dim at the same time. To create empty arrays to fill in with region, use a separate call to to_zarr() with compute=False. See "Appending to existing Zarr stores" in the reference documentation for full details.
safe_chunks (bool, default: True) – If True, only allow writes when there is a many-to-one relationship between Zarr chunks (specified in encoding) and Dask chunks. Set False to override this restriction; however, data may become corrupted if Zarr arrays are written in parallel. This option may be useful in combination with compute=False to initialize a Zarr store from an existing DataArray with arbitrary chunk structure.
storage_options (dict, optional) – Any additional parameters for the storage backend (ignored for local paths).
zarr_version (int or None, optional) – The desired zarr spec version to target (currently 2 or 3). The default of None will attempt to determine the zarr version from store when possible, otherwise defaulting to 2.
- Returns
dask.delayed.Delayed if compute is False
ZarrStore otherwise
Notes
- Zarr chunking behavior:
If chunks are found in the encoding argument or attribute corresponding to any DataArray, those chunks are used. If a DataArray is a dask array, it is written with those chunks. If no other chunks are found, Zarr uses its own heuristics to choose automatic chunk sizes.
- encoding:
The encoding attribute (if it exists) of the DataArray(s) will be used. Override any existing encodings by providing the encoding kwarg.
See also
Dataset.to_zarr
- io.zarr
The I/O user guide, with more details and examples.
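A minimal sketch (the store path is hypothetical; requires the zarr package):
>>> da = xr.DataArray([1.0, 2.0], dims="x", name="data")
>>> store = da.to_zarr("output.zarr", mode="w")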
- transpose(*dims: Hashable, transpose_coords: bool = True, missing_dims: ErrorOptionsWithWarn = 'raise') Self #
Return a new DataArray object with transposed dimensions.
- Parameters
*dims (Hashable, optional) – By default, reverse the dimensions. Otherwise, reorder the dimensions to this order.
transpose_coords (bool, default: True) – If True, also transpose the coordinates of this DataArray.
missing_dims ({"raise", "warn", "ignore"}, default: "raise") – What to do if dimensions that should be selected from are not present in the DataArray: - “raise”: raise an exception - “warn”: raise a warning, and ignore the missing dimensions - “ignore”: ignore the missing dimensions
- Returns
transposed – The returned DataArray’s array is transposed.
- Return type
DataArray
Notes
This operation returns a view of this array’s data. It is lazy for dask-backed DataArrays but not for numpy-backed DataArrays – the data will be fully loaded.
See also
numpy.transpose
,Dataset.transpose
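A short sketch:
>>> da = xr.DataArray(np.arange(6).reshape(2, 3), dims=("x", "y"))
>>> da.transpose("y", "x").shape
(3, 2)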
- unify_chunks() Self #
Unify chunk size along all chunked dimensions of this DataArray.
- Return type
DataArray with consistent chunk sizes for all dask-array variables
See also
dask.array.core.unify_chunks
- unstack(dim: Dims = None, *, fill_value: Any = <NA>, sparse: bool = False) Self #
Unstack existing dimensions corresponding to MultiIndexes into multiple new dimensions.
New dimensions will be added at the end.
- Parameters
dim (str, Iterable of Hashable or None, optional) – Dimension(s) over which to unstack. By default unstacks all MultiIndexes.
fill_value (scalar or dict-like, default: nan) – Value to be filled. If a dict-like, maps variable names to fill values. Use the data array’s name to refer to its name. If not provided or if the dict-like does not contain all variables, the dtype’s NA value will be used.
sparse (bool, default: False) – Use sparse-array if True
- Returns
unstacked – Array with unstacked data.
- Return type
DataArray
Examples
>>> arr = xr.DataArray(
...     np.arange(6).reshape(2, 3),
...     coords=[("x", ["a", "b"]), ("y", [0, 1, 2])],
... )
>>> arr
<xarray.DataArray (x: 2, y: 3)>
array([[0, 1, 2],
       [3, 4, 5]])
Coordinates:
  * x        (x) <U1 'a' 'b'
  * y        (y) int64 0 1 2
>>> stacked = arr.stack(z=("x", "y"))
>>> stacked.indexes["z"]
MultiIndex([('a', 0),
            ('a', 1),
            ('a', 2),
            ('b', 0),
            ('b', 1),
            ('b', 2)],
           name='z')
>>> roundtripped = stacked.unstack()
>>> arr.identical(roundtripped)
True
See also
DataArray.stack
- classmethod validate_dims(val)#
Make sure the dims are the same as _dims, then put them in the correct order.
- property values: numpy.ndarray#
The array’s data as a numpy.ndarray.
If the array’s data is not a numpy.ndarray this will attempt to convert it naively using np.array(), which will raise an error if the array type does not support coercion like this (e.g. cupy).
- var(dim: Dims = None, *, skipna: bool | None = None, ddof: int = 0, keep_attrs: bool | None = None, **kwargs: Any) Self #
Reduce this DataArray's data by applying var along some dimension(s).
- Parameters
dim (str, Iterable of Hashable, "..." or None, default: None) – Name of dimension[s] along which to apply var. For e.g. dim="x" or dim=["x", "y"]. If "..." or None, will reduce over all dimensions.
skipna (bool or None, optional) – If True, skip missing values (as marked by NaN). By default, only skips missing values for float dtypes; other dtypes either do not have a sentinel missing value (int) or skipna=True has not been implemented (object, datetime64 or timedelta64).
ddof (int, default: 0) – "Delta Degrees of Freedom": the divisor used in the calculation is N - ddof, where N represents the number of elements.
keep_attrs (bool or None, optional) – If True, attrs will be copied from the original object to the new one. If False, the new object will be returned without attributes.
**kwargs (Any) – Additional keyword arguments passed on to the appropriate array function for calculating var on this object's data. These could include dask-specific kwargs like split_every.
- Returns
reduced – New DataArray with var applied to its data and the indicated dimension(s) removed.
- Return type
DataArray
See also
numpy.var, dask.array.var, Dataset.var
- agg
User guide on reduction or aggregation operations.
Notes
Non-numeric variables will be removed prior to reducing.
Examples
>>> da = xr.DataArray(
...     np.array([1, 2, 3, 0, 2, np.nan]),
...     dims="time",
...     coords=dict(
...         time=("time", pd.date_range("2001-01-01", freq="M", periods=6)),
...         labels=("time", np.array(["a", "b", "c", "c", "b", "a"])),
...     ),
... )
>>> da
<xarray.DataArray (time: 6)>
array([ 1.,  2.,  3.,  0.,  2., nan])
Coordinates:
  * time     (time) datetime64[ns] 2001-01-31 2001-02-28 ... 2001-06-30
    labels   (time) <U1 'a' 'b' 'c' 'c' 'b' 'a'
>>> da.var()
<xarray.DataArray ()>
array(1.04)
Use skipna to control whether NaNs are ignored.
>>> da.var(skipna=False)
<xarray.DataArray ()>
array(nan)
Specify ddof=1 for an unbiased estimate.
>>> da.var(skipna=True, ddof=1)
<xarray.DataArray ()>
array(1.3)
- property variable: xarray.core.variable.Variable#
Low level interface to the Variable object for this DataArray.
- weighted(weights: DataArray) DataArrayWeighted #
Weighted DataArray operations.
- Parameters
weights (DataArray) – An array of weights associated with the values in this Dataset. Each value in the data contributes to the reduction operation according to its associated weight.
Notes
weights must be a DataArray and cannot contain missing values. Missing values can be replaced by weights.fillna(0).
- Return type
core.weighted.DataArrayWeighted
See also
Dataset.weighted
- compute.weighted
User guide on weighted array reduction using weighted()
- xarray-tutorial:fundamentals/03.4_weighted
Tutorial on Weighted Reduction using weighted()
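A quick sketch of a weighted mean (the weights are illustrative):
>>> da = xr.DataArray([1.0, 2.0, 3.0], dims="x")
>>> weights = xr.DataArray([0.5, 0.25, 0.25], dims="x")
>>> da.weighted(weights).mean()
<xarray.DataArray ()>
array(1.75)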
- where(cond: Any, other: Any = <NA>, drop: bool = False) Self #
Filter elements from this object according to a condition.
Returns elements from ‘DataArray’, where ‘cond’ is True, otherwise fill in ‘other’.
This operation follows the normal broadcasting and alignment rules that xarray uses for binary arithmetic.
- Parameters
cond (DataArray, Dataset, or callable) – Locations at which to preserve this object’s values. dtype must be bool. If a callable, the callable is passed this object, and the result is used as the value for cond.
other (scalar, DataArray, Dataset, or callable, optional) – Value to use for locations in this object where cond is False. By default, these locations are filled with NA. If a callable, it must expect this object as its only parameter.
drop (bool, default: False) – If True, coordinate labels that only correspond to False values of the condition are dropped from the result.
- Returns
Same xarray type as caller, with dtype float64.
- Return type
DataArray or Dataset
Examples
>>> a = xr.DataArray(np.arange(25).reshape(5, 5), dims=("x", "y"))
>>> a
<xarray.DataArray (x: 5, y: 5)>
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19],
       [20, 21, 22, 23, 24]])
Dimensions without coordinates: x, y
>>> a.where(a.x + a.y < 4)
<xarray.DataArray (x: 5, y: 5)>
array([[ 0.,  1.,  2.,  3., nan],
       [ 5.,  6.,  7., nan, nan],
       [10., 11., nan, nan, nan],
       [15., nan, nan, nan, nan],
       [nan, nan, nan, nan, nan]])
Dimensions without coordinates: x, y
>>> a.where(a.x + a.y < 5, -1)
<xarray.DataArray (x: 5, y: 5)>
array([[ 0,  1,  2,  3,  4],
       [ 5,  6,  7,  8, -1],
       [10, 11, 12, -1, -1],
       [15, 16, -1, -1, -1],
       [20, -1, -1, -1, -1]])
Dimensions without coordinates: x, y
>>> a.where(a.x + a.y < 4, drop=True)
<xarray.DataArray (x: 4, y: 4)>
array([[ 0.,  1.,  2.,  3.],
       [ 5.,  6.,  7., nan],
       [10., 11., nan, nan],
       [15., nan, nan, nan]])
Dimensions without coordinates: x, y
>>> a.where(lambda x: x.x + x.y < 4, lambda x: -x)
<xarray.DataArray (x: 5, y: 5)>
array([[  0,   1,   2,   3,  -4],
       [  5,   6,   7,  -8,  -9],
       [ 10,  11, -12, -13, -14],
       [ 15, -16, -17, -18, -19],
       [-20, -21, -22, -23, -24]])
Dimensions without coordinates: x, y
See also
numpy.where
corresponding numpy function
where
equivalent function
- property xindexes: xarray.core.indexes.Indexes#
Mapping of Index objects used for label based indexing.