API
The FittingData type
FittingObjectiveFunctions.FittingData — Type

```julia
mutable struct FittingData
```

Data type for fitting data.
This struct is only a container to check consistency and is not performance relevant, hence the mutability.
Fields
- `independent`: Array of data points for the independent variable.
- `dependent`: Array of data points for the dependent variable.
- `errors`: Array of measurement errors for the dependent variable.
- `distributions`: Distribution(s) for the uncertainty of the dependent variable. Can be a function or an array of functions (one for each data point).
Elements with the same index belong together, i.e. define a measurement:
```julia
(independent[i], dependent[i], errors[i], distributions[i])
```

Constructors

```julia
FittingData(X,Y)
FittingData(X,Y,ΔY; distributions = (y,m,Δy) -> exp(-(y-m)^2/(2*Δy^2))/(sqrt(2*pi) * Δy))
```

Distributions
The distributions must have the signature (y,m,Δy), where y is the dependent variable, m is the result of the model function and Δy is the error of the dependent variable. If the distributions are not specified, a normal distribution is used:
```julia
(y,m,Δy) -> exp(-(y-m)^2/(2*Δy^2))/(sqrt(2*pi) * Δy)
```

The ModelFunctions type
FittingObjectiveFunctions.ModelFunctions — Type

```julia
mutable struct ModelFunctions
```

Mutable type to collect model functions (and the respective partial derivatives) to construct objective functions.
This struct is only a container to check consistency and is not performance relevant, hence the mutability.
Fields
- `model`: The model function. Must have the signature `(x,λ)`, where `x` is the independent variable and `λ` is the parameter (array).
- `partials`: Array of partial derivative functions (one for each parameter array element). Must have the same signature `(x,λ)` as the model function.
Constructor
```julia
ModelFunctions(model, partials = nothing)
```

Examples
```julia
julia> ModelFunctions((x,λ)-> λ*x)

julia> ModelFunctions((x,λ)-> λ*x, partials = [(x,λ)-> x])

julia> ModelFunctions((x,λ)-> λ[1]*x+λ[2], partials = [(x,λ)-> x, (x,λ)-> 1])
```

FittingObjectiveFunctions.consistency_check — Function

```julia
consistency_check(fitting_data::FittingData, model::ModelFunctions)
```

Test `fitting_data` and `model`, e.g. after mutation.
```julia
consistency_check(fitting_data::FittingData, model::ModelFunctions, λ)
```

Test if all functions can be evaluated with the parameter (array) `λ`. Also test `fitting_data` and `model`, e.g. after mutation.
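The container types and checks above can be combined as in the following minimal sketch; it assumes the package is installed, and all data values are made-up placeholders:

```julia
using FittingObjectiveFunctions

# Hypothetical measurement data (placeholder values).
X  = [1.0, 2.0, 3.0]   # independent variable
Y  = [2.1, 3.9, 6.2]   # dependent variable
ΔY = [0.2, 0.2, 0.3]   # measurement errors

data  = FittingData(X, Y, ΔY)  # default: normal uncertainty distributions
model = ModelFunctions((x, λ) -> λ * x, partials = [(x, λ) -> x])

# Re-check after mutating `data` or `model`:
consistency_check(data, model)
consistency_check(data, model, 2.0)  # additionally test evaluation with λ = 2.0
```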
Least squares objective
FittingObjectiveFunctions.lsq_objective — Function

```julia
lsq_objective(data::FittingData, model::ModelFunctions)
```

Return the least squares objective as a function `λ -> lsq(λ)`.
Analytical expression
- independent data points $x_i$
- dependent data points $y_i$
- errors $\Delta y_i$
- model function $m$
\[\text{lsq}(\lambda) = \sum_{i=1}^N \frac{(y_i - m(x_i,\lambda))^2}{\Delta y_i^2}\]
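As a usage sketch (placeholder data values; the partial derivatives are not needed for the objective itself):

```julia
using FittingObjectiveFunctions

data  = FittingData([1.0, 2.0, 3.0], [2.1, 3.9, 6.2], [0.2, 0.2, 0.3])  # placeholders
model = ModelFunctions((x, λ) -> λ * x)

lsq = lsq_objective(data, model)  # returns a function λ -> lsq(λ)
lsq(2.0)                          # least squares value for the parameter λ = 2.0
```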
FittingObjectiveFunctions.lsq_partials — Function

```julia
lsq_partials(data::FittingData, model::ModelFunctions)
```

Return the partial derivatives of the least squares objective function `ob(λ)` as an array of functions `[λ->∂_1 ob(λ), …, λ->∂_n ob(λ)]`.
- The partial derivatives $\frac{\partial}{\partial \lambda_\mu} m(x,\lambda)$ of the model function must be specified in the `ModelFunctions` object `model`.
Analytical expression
- independent data points: $x_i$
- dependent data points: $y_i$
- errors: $\Delta y_i$
- model function: $m$
- partial derivatives of model function in: $\frac{\partial}{\partial \lambda_\mu}m(x,\lambda)$
\[\frac{\partial}{\partial \lambda_\mu} \text{lsq}(\lambda) = \sum_{i=1}^N \frac{ 2 \cdot (m(x_i,\lambda) - y_i) \cdot \frac{\partial}{\partial \lambda_\mu} m(x_i,\lambda)}{\Delta y_i^2}\]
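A usage sketch with placeholder data; here the model's partial derivatives must be supplied:

```julia
using FittingObjectiveFunctions

data  = FittingData([1.0, 2.0, 3.0], [2.1, 3.9, 6.2], [0.2, 0.2, 0.3])  # placeholders
model = ModelFunctions((x, λ) -> λ * x, partials = [(x, λ) -> x])  # partials required

∂lsq = lsq_partials(data, model)  # one function per parameter
∂lsq[1](2.0)                      # ∂_1 lsq evaluated at λ = 2.0
```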
FittingObjectiveFunctions.lsq_gradient — Function

```julia
lsq_gradient(data::FittingData, model::ModelFunctions)
```

Return the gradient of the least squares objective function `ob(λ)` as a function `(gradient,λ) -> grad!(gradient,λ)`.
- The gradient function `grad!` mutates (for performance) and returns `gradient`. The initial elements of `gradient` do not matter, but the type and length must fit.
- The partial derivatives $\frac{\partial}{\partial \lambda_\mu} m(x,\lambda)$ of the model function must be specified in the `ModelFunctions` object `model`.
Analytical expression
- independent data points: $x_i$
- dependent data points: $y_i$
- errors: $\Delta y_i$
- model function: $m$
- partial derivatives of model function in: $\frac{\partial}{\partial \lambda_\mu}m(x,\lambda)$
\[\nabla \text{lsq}(\lambda) = \sum_{\mu} \left(\sum_{i=1}^N \frac{ 2 \cdot (m(x_i,\lambda) - y_i) \cdot \frac{\partial}{\partial \lambda_\mu}m(x_i,\lambda) }{\Delta y_i^2} \right) \vec{e}_\mu\]
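A sketch of the mutating gradient call, using the two-parameter linear model from the `ModelFunctions` examples and placeholder data:

```julia
using FittingObjectiveFunctions

data  = FittingData([1.0, 2.0, 3.0], [2.3, 4.1, 6.2], [0.2, 0.2, 0.3])  # placeholders
model = ModelFunctions((x, λ) -> λ[1] * x + λ[2], partials = [(x, λ) -> x, (x, λ) -> 1])

grad! = lsq_gradient(data, model)
gradient = zeros(2)          # initial elements are irrelevant; type and length must fit
grad!(gradient, [2.0, 0.0])  # mutates and returns `gradient`
```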
Posterior objective
FittingObjectiveFunctions.posterior_objective — Function

```julia
posterior_objective(data::FittingData,
	model::ModelFunctions,
	prior::Function = λ -> 1.0
)
```

Return the unnormalized posterior density as a function `λ -> p(λ)`.

Using the default prior `λ -> 1.0`, e.g. by passing only the first two arguments, leads to the likelihood objective for a maximum likelihood fit.
Analytical expression
- independent data points $x_i$
- dependent data points $y_i$
- errors $\Delta y_i$
- model function $m$
- $y$-uncertainty distributions: $q_i$
- prior distribution: $p_0$
\[p(\lambda) = p_0(\lambda) \cdot \prod_{i=1}^N q_i(y_i,m(x_i,\lambda),\Delta y_i)\]
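A usage sketch with placeholder data and a hypothetical prior; the uncertainty distributions $q_i$ come from the `FittingData` object (normal by default):

```julia
using FittingObjectiveFunctions

data  = FittingData([1.0, 2.0, 3.0], [2.1, 3.9, 6.2], [0.2, 0.2, 0.3])  # placeholders
model = ModelFunctions((x, λ) -> λ * x)

likelihood = posterior_objective(data, model)  # default prior λ -> 1.0: likelihood
posterior  = posterior_objective(data, model, λ -> exp(-λ^2 / 2))  # hypothetical prior
posterior(2.0)
```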
FittingObjectiveFunctions.log_posterior_objective — Function

```julia
log_posterior_objective(data::FittingData,
	model::ModelFunctions,
	log_prior::Function = log_uniform_prior
)
```

Return the logarithmic posterior density as a function `λ -> L_p(λ)`.
- The $y$-uncertainty distributions of the `FittingData` object `data` and the log-prior `log_prior` must be specified in logarithmic form.
- Using the default prior, e.g. by passing only the first two arguments, leads to the logarithmic likelihood objective for a maximum likelihood fit.
Analytical expression
- independent data points $x_i$
- dependent data points $y_i$
- errors $\Delta y_i$
- model function $m$
- logarithmic $y$-uncertainty distributions: $L_i$
- logarithmic prior distribution: $L_0$
\[L_p(\lambda) = L_0(\lambda) + \sum_{i=1}^N L_i(y_i,m(x_i,\lambda),\Delta y_i)\]
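A sketch with placeholder data; note that the distributions passed to `FittingData` must be logarithmic here (a log-normal-density is used as an example):

```julia
using FittingObjectiveFunctions

# Distributions in logarithmic form (logarithm of the default normal density):
data = FittingData([1.0, 2.0, 3.0], [2.1, 3.9, 6.2], [0.2, 0.2, 0.3],
	distributions = (y, m, Δy) -> -(y - m)^2 / (2 * Δy^2) - log(sqrt(2 * pi) * Δy))
model = ModelFunctions((x, λ) -> λ * x)

log_likelihood = log_posterior_objective(data, model)  # default log-prior
log_likelihood(2.0)
```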
FittingObjectiveFunctions.log_posterior_partials — Function

```julia
log_posterior_partials(data::FittingData,
	model::ModelFunctions,
	log_distribution_derivatives,
	prior_partials::Union{Nothing,AbstractArray{Function,N}} = nothing
)
```

Return the partial derivatives of the log-posterior distribution `L_p(λ)` as an array of functions `[λ->∂_1 L_p(λ), …, λ->∂_n L_p(λ)]`.

- The partial derivatives $\frac{\partial}{\partial \lambda_\mu} m(x,\lambda)$ of the model function must be specified in the `ModelFunctions` object `model`.
- `log_distribution_derivatives` can either be a function $\frac{\partial}{\partial m} L(y,m,\Delta y)$ (the same derivative for all distributions) or an array of functions $\left[\frac{\partial}{\partial m} L_1(y,m,\Delta y),\ldots,\frac{\partial}{\partial m} L_n(y,m,\Delta y) \right]$.
- `prior_partials` can either be `nothing` (for the log-likelihood) or an array of functions $\left[\frac{\partial}{\partial \lambda_1} L_0(\lambda),\ldots,\frac{\partial}{\partial \lambda_n} L_0(\lambda) \right]$.
Analytical expression
- independent data points $x_i$
- dependent data points $y_i$
- errors $\Delta y_i$
- model function $m$
- logarithmic $y$-uncertainty distributions: $L_i$
- logarithmic prior distribution: $L_0$
- partial derivatives of model function: $\frac{\partial}{\partial \lambda_\mu} m(x,\lambda)$
- partial derivatives of the logarithmic $y$-uncertainty distributions: $\frac{\partial}{\partial m} L_i(y,m,\Delta y)$
- partial derivatives of the logarithmic prior: $\frac{\partial}{\partial \lambda_\mu} L_0(λ)$
\[\frac{\partial}{\partial \lambda_\mu} L_p(\lambda) = \frac{\partial}{\partial \lambda_\mu} L_0(\lambda) + \sum_{i=1}^N \frac{\partial}{\partial m} L_i(y_i, m(x_i,\lambda), \Delta y_i)\cdot \frac{\partial}{\partial \lambda_\mu} m(x_i,\lambda)\]
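A sketch with placeholder data, using a logarithmic normal density and its hand-derived $\partial/\partial m$:

```julia
using FittingObjectiveFunctions

data = FittingData([1.0, 2.0, 3.0], [2.1, 3.9, 6.2], [0.2, 0.2, 0.3],
	distributions = (y, m, Δy) -> -(y - m)^2 / (2 * Δy^2) - log(sqrt(2 * pi) * Δy))
model = ModelFunctions((x, λ) -> λ * x, partials = [(x, λ) -> x])

# ∂/∂m of the logarithmic normal density above (same for all data points):
log_derivative = (y, m, Δy) -> (y - m) / Δy^2

∂L = log_posterior_partials(data, model, log_derivative)  # prior_partials = nothing → log-likelihood
∂L[1](2.0)
```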
FittingObjectiveFunctions.log_posterior_gradient — Function

```julia
log_posterior_gradient(data::FittingData,
	model::ModelFunctions,
	log_distribution_derivatives,
	prior_partials::Union{Nothing,AbstractArray{Function,N}} = nothing
)
```

Return the gradient of the log-posterior distribution `L_p(λ)` as a function `(gradient,λ) -> grad!(gradient,λ)`.

- The gradient function `grad!` mutates (for performance) and returns `gradient`. The initial elements of `gradient` do not matter, but the type and length must fit.
- The partial derivatives $\frac{\partial}{\partial \lambda_\mu} m(x,\lambda)$ of the model function must be specified in the `ModelFunctions` object `model`.
- `log_distribution_derivatives` can either be a function $\frac{\partial}{\partial m} L(y,m,\Delta y)$ (the same derivative for all distributions) or an array of functions $\left[\frac{\partial}{\partial m} L_1(y,m,\Delta y),\ldots,\frac{\partial}{\partial m} L_n(y,m,\Delta y) \right]$.
- `prior_partials` can either be `nothing` (for the log-likelihood) or an array of functions $\left[\frac{\partial}{\partial \lambda_1} L_0(\lambda),\ldots,\frac{\partial}{\partial \lambda_n} L_0(\lambda) \right]$.
Analytical expression
- independent data points $x_i$
- dependent data points $y_i$
- errors $\Delta y_i$
- model function $m$
- logarithmic $y$-uncertainty distributions: $L_i$
- logarithmic prior distribution: $L_0$
- partial derivatives of model function: $\frac{\partial}{\partial \lambda_\mu} m(x,\lambda)$
- partial derivatives of the logarithmic $y$-uncertainty distributions: $\frac{\partial}{\partial m} L_i(y,m,\Delta y)$
- partial derivatives of the logarithmic prior: $\frac{\partial}{\partial \lambda_\mu} L_0(λ)$
\[\nabla L_p(\lambda) = \sum_{\mu} \left( \frac{\partial}{\partial \lambda_\mu} L_0(\lambda) + \sum_{i=1}^N \frac{\partial}{\partial m} L_i(y_i, m(x_i,\lambda), \Delta y_i)\cdot \frac{\partial}{\partial \lambda_\mu} m(x_i,\lambda) \right) \vec{e}_\mu\]
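A sketch of the mutating gradient call with placeholder data, a two-parameter linear model, and a logarithmic normal density with its hand-derived $\partial/\partial m$:

```julia
using FittingObjectiveFunctions

data = FittingData([1.0, 2.0, 3.0], [2.3, 4.1, 6.2], [0.2, 0.2, 0.3],
	distributions = (y, m, Δy) -> -(y - m)^2 / (2 * Δy^2) - log(sqrt(2 * pi) * Δy))
model = ModelFunctions((x, λ) -> λ[1] * x + λ[2], partials = [(x, λ) -> x, (x, λ) -> 1])
log_derivative = (y, m, Δy) -> (y - m) / Δy^2  # ∂/∂m of the log-density above

grad! = log_posterior_gradient(data, model, log_derivative)  # nothing → log-likelihood gradient
gradient = zeros(2)          # initial elements are irrelevant; type and length must fit
grad!(gradient, [2.0, 0.0])  # mutates and returns `gradient`
```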