API

The FittingData type

FittingObjectiveFunctions.FittingData (Type)
mutable struct FittingData

Data type for fitting data.

This struct is only a container used for consistency checks; it is not performance-critical, hence the mutability.

Fields

  • independent: Array of data points for the independent variable.
  • dependent: Array of data points for the dependent variable.
  • errors: Array of measurement errors for the dependent variable.
  • distributions: Distribution(s) for the uncertainty of the dependent variable. Can be a function or an array of functions (one for each data point).

Elements with the same index belong together, i.e. define a measurement:

(independent[i], dependent[i], errors[i], distributions[i])

Constructors

FittingData(X,Y)
FittingData(X,Y,ΔY; distributions = (y,m,Δy) -> exp(-(y-m)^2/(2*Δy^2))/(sqrt(2*pi) * Δy))

Distributions

The distributions must have the signature (y,m,Δy), where y is the dependent variable, m is the result of the model function and Δy is the error of the dependent variable. If the distributions are not specified, a normal distribution is used:

(y,m,Δy) -> exp(-(y-m)^2/(2*Δy^2))/(sqrt(2*pi) * Δy)
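For reference, the default distribution above can be written as a standalone function. The name `default_dist` is purely illustrative and not part of the package:

```julia
# Default y-uncertainty distribution from the docs: a normal density in y,
# centered at the model value m, with standard deviation Δy.
# `default_dist` is a hypothetical name for illustration only.
default_dist(y, m, Δy) = exp(-(y - m)^2 / (2 * Δy^2)) / (sqrt(2 * pi) * Δy)

# The density peaks at y == m with value 1/(sqrt(2π)·Δy).
peak = default_dist(1.0, 1.0, 0.5)
```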

The ModelFunctions type

FittingObjectiveFunctions.ModelFunctions (Type)
mutable struct ModelFunctions

Mutable type to collect model functions (and the respective partial derivatives) to construct objective functions.

This struct is only a container used for consistency checks; it is not performance-critical, hence the mutability.

Fields

  • model: The model function. Must have the signature (x,λ), where x is the independent variable, and λ is the parameter (array).
  • partials: Array of partial derivative functions (one for each parameter array element). Must have the same signature (x,λ) as the model function.

Constructor

ModelFunctions(model; partials = nothing)

Examples

julia> ModelFunctions((x,λ) -> λ*x)
julia> ModelFunctions((x,λ) -> λ*x, partials = [(x,λ) -> x])
julia> ModelFunctions((x,λ) -> λ[1]*x + λ[2], partials = [(x,λ) -> x, (x,λ) -> 1])
FittingObjectiveFunctions.consistency_check (Function)
consistency_check(fitting_data::FittingData,model::ModelFunctions)

Test fitting_data and model for mutual consistency, e.g. after mutation.

consistency_check(fitting_data::FittingData,model::ModelFunctions,λ)

Test if all functions can be evaluated with the parameter (array) λ. Also, test fitting_data and model, e.g. after mutation.


Least squares objective

FittingObjectiveFunctions.lsq_objective (Function)
lsq_objective(data::FittingData,model::ModelFunctions)

Return the least squares objective as a function λ -> lsq(λ).

Analytical expression

  • independent data points $x_i$
  • dependent data points $y_i$
  • errors $\Delta y_i$
  • model function $m$

\[\text{lsq}(\lambda) = \sum_{i=1}^N \frac{(y_i - m(x_i,\lambda))^2}{\Delta y_i^2}\]

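The analytical expression can be written out directly. The following self-contained sketch implements the same sum; it is not the closure returned by `lsq_objective`, and all names and data are illustrative:

```julia
# Least squares sum from the analytical expression above.
lsq(λ, X, Y, ΔY, m) = sum((Y[i] - m(X[i], λ))^2 / ΔY[i]^2 for i in eachindex(X))

m(x, λ) = λ * x                        # linear model with a scalar parameter
X = [1.0, 2.0]; Y = [2.0, 4.0]; ΔY = [1.0, 1.0]

lsq_exact = lsq(2.0, X, Y, ΔY, m)      # data lies exactly on m(x, 2) → 0.0
lsq_off   = lsq(1.0, X, Y, ΔY, m)      # residuals -1 and -2 → 1 + 4 = 5.0
```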
FittingObjectiveFunctions.lsq_partials (Function)
lsq_partials(data::FittingData,model::ModelFunctions)

Return the partial derivatives of the least squares objective function ob(λ) as an array of functions [λ -> ∂_1 ob(λ), …, λ -> ∂_n ob(λ)].

  • The partial derivatives $\frac{\partial}{\partial \lambda_\mu} m(x,\lambda)$ of the model function must be specified in the ModelFunctions object model.

Analytical expression

  • independent data points: $x_i$
  • dependent data points: $y_i$
  • errors: $\Delta y_i$
  • model function: $m$
  • partial derivatives of the model function: $\frac{\partial}{\partial \lambda_\mu}m(x,\lambda)$

\[ \frac{\partial}{\partial \lambda_\mu} \text{lsq}(\lambda) = \sum_{i=1}^N \frac{ 2 \cdot (m(x_i,\lambda) - y_i) \cdot \frac{\partial}{\partial \lambda_\mu} m(x_i,\lambda)}{\Delta y_i^2}\]

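For a scalar parameter the formula reduces to a single derivative. A self-contained sketch of that case (illustrative names and data, not the functions returned by `lsq_partials`):

```julia
# Partial derivative of the least squares sum from the analytical expression.
m(x, λ) = λ * x
∂m(x, λ) = x                            # ∂m/∂λ for the linear model
X = [1.0, 2.0]; Y = [2.0, 4.0]; ΔY = [1.0, 1.0]

∂lsq(λ) = sum(2 * (m(X[i], λ) - Y[i]) * ∂m(X[i], λ) / ΔY[i]^2 for i in eachindex(X))

slope = ∂lsq(1.0)   # 2(1-2)·1 + 2(2-4)·2 = -10.0
```

At the best-fit parameter λ = 2 the derivative vanishes, as expected at a minimum.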
FittingObjectiveFunctions.lsq_gradient (Function)
lsq_gradient(data::FittingData,model::ModelFunctions)

Return the gradient of the least squares objective function ob(λ) as a function (gradient,λ) -> grad!(gradient,λ).

  • The gradient function grad! mutates gradient in place (for performance) and returns it. The initial elements of gradient do not matter, but its element type and length must match the parameter (array).

  • The partial derivatives $\frac{\partial}{\partial \lambda_\mu} m(x,\lambda)$ of the model function must be specified in the ModelFunctions object model.

Analytical expression

  • independent data points: $x_i$
  • dependent data points: $y_i$
  • errors: $\Delta y_i$
  • model function: $m$
  • partial derivatives of the model function: $\frac{\partial}{\partial \lambda_\mu}m(x,\lambda)$

\[\nabla \text{lsq}(\lambda) = \sum_{\mu} \left(\sum_{i=1}^N \frac{ 2 \cdot (m(x_i,\lambda) - y_i) \cdot \frac{\partial}{\partial \lambda_\mu}m(x_i,\lambda) }{\Delta y_i^2} \right) \vec{e}_\mu\]

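The mutating (gradient, λ) -> grad!(gradient, λ) pattern can be sketched as follows. This is a hand-written stand-in for the function returned by lsq_gradient; the model, partials, and data are illustrative:

```julia
# In-place gradient of the least squares sum for a two-parameter linear model.
m(x, λ) = λ[1] * x + λ[2]
partials = [(x, λ) -> x, (x, λ) -> 1.0]  # ∂m/∂λ₁ and ∂m/∂λ₂
X = [1.0, 2.0]; Y = [2.0, 4.0]; ΔY = [1.0, 1.0]

function grad!(gradient, λ)
    for μ in eachindex(gradient)
        gradient[μ] = sum(2 * (m(X[i], λ) - Y[i]) * partials[μ](X[i], λ) / ΔY[i]^2
                          for i in eachindex(X))
    end
    return gradient                       # mutated in place and returned
end

g = grad!(zeros(2), [1.0, 0.0])           # → [-10.0, -6.0]
```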

Posterior objective

FittingObjectiveFunctions.posterior_objective (Function)
posterior_objective(data::FittingData,
	model::ModelFunctions,
	prior::Function = λ -> 1
)

Return the unnormalized posterior density as a function λ -> p(λ).

Using the default prior λ -> 1, e.g. by passing only the first two arguments, leads to the likelihood objective for a maximum likelihood fit.

Analytical expression

  • independent data points $x_i$
  • dependent data points $y_i$
  • errors $\Delta y_i$
  • model function $m$
  • $y$-uncertainty distributions: $q_i$
  • prior distribution: $p_0$

\[p(\lambda) = p_0(\lambda) \cdot \prod_{i=1}^N q_i(y_i,m(x_i,\lambda),\Delta y_i)\]

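A self-contained sketch of the product above, using the normal density for q and a uniform prior (all names and data are illustrative, not the closure returned by posterior_objective):

```julia
# Unnormalized posterior as a direct product, following the formula above.
q(y, m, Δy) = exp(-(y - m)^2 / (2 * Δy^2)) / (sqrt(2 * pi) * Δy)
p0(λ) = 1.0                            # uniform prior → likelihood objective
m(x, λ) = λ * x
X = [1.0, 2.0]; Y = [2.0, 4.0]; ΔY = [1.0, 1.0]

p(λ) = p0(λ) * prod(q(Y[i], m(X[i], λ), ΔY[i]) for i in eachindex(X))

p_best = p(2.0)   # exact fit: each factor is 1/√(2π), so p_best = 1/(2π)
```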
FittingObjectiveFunctions.log_posterior_objective (Function)
log_posterior_objective(data::FittingData,
	model::ModelFunctions, 
	log_prior::Function = log_uniform_prior
)

Return the logarithmic posterior density as a function λ -> L_p(λ).

  • The y-uncertainty distributions of the FittingData object data and the log-prior must be specified in logarithmic form.
  • Using the default prior, e.g. by passing only the first two arguments, leads to the logarithmic likelihood objective for a maximum likelihood fit.

Analytical expression

  • independent data points $x_i$
  • dependent data points $y_i$
  • errors $\Delta y_i$
  • model function $m$
  • logarithmic $y$-uncertainty distributions: $L_i$
  • logarithmic prior distribution: $L_0$

\[L_p(\lambda) = L_0(\lambda) + \sum_{i=1}^N L_i(y_i,m(x_i,\lambda),\Delta y_i)\]

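In logarithmic form, products become sums. A sketch with the logarithm of the normal density and a uniform prior, i.e. L_0 = 0 (illustrative names and data):

```julia
# Logarithmic posterior as a direct sum, following the formula above.
L(y, m, Δy) = -(y - m)^2 / (2 * Δy^2) - log(sqrt(2 * pi) * Δy)  # log-normal density
L0(λ) = 0.0                            # log of a uniform prior
m(x, λ) = λ * x
X = [1.0, 2.0]; Y = [2.0, 4.0]; ΔY = [1.0, 1.0]

Lp(λ) = L0(λ) + sum(L(Y[i], m(X[i], λ), ΔY[i]) for i in eachindex(X))

Lp_best = Lp(2.0)   # exact fit: -2·log(√(2π)) = -log(2π)
```

Since the logarithm is monotonic, L_p attains its maximum at the same parameter as the (unlogarithmized) posterior density.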
FittingObjectiveFunctions.log_posterior_partials (Function)
log_posterior_partials(data::FittingData,
	model::ModelFunctions,
	log_distribution_derivatives, 
	prior_partials::Union{Nothing,AbstractArray{Function,N}} = nothing
)

Return the partial derivatives of the log-posterior distribution L_p(λ) as an array of functions [λ -> ∂_1 L_p(λ), …, λ -> ∂_n L_p(λ)].

  • The partial derivatives $\frac{\partial}{\partial \lambda_\mu} m(x,\lambda)$ of the model function must be specified in the ModelFunctions object model.

  • log_distribution_derivatives can either be a function $\frac{\partial}{\partial m} L(y,m,\Delta y)$ (the same derivative for all distributions) or an array of functions $\left[\frac{\partial}{\partial m} L_1(y,m,\Delta y),\ldots,\frac{\partial}{\partial m} L_n(y,m,\Delta y) \right]$.

  • prior_partials can either be nothing (for the log-likelihood), or an array of functions $\left[\frac{\partial}{\partial \lambda_1} L_0(λ),\ldots,\frac{\partial}{\partial \lambda_n} L_0(λ) \right]$.

Analytical expression

  • independent data points $x_i$
  • dependent data points $y_i$
  • errors $\Delta y_i$
  • model function $m$
  • logarithmic $y$-uncertainty distributions: $L_i$
  • logarithmic prior distribution: $L_0$
  • partial derivatives of model function: $\frac{\partial}{\partial \lambda_\mu} m(x,\lambda)$
  • partial derivatives of the logarithmic $y$-uncertainty distributions: $\frac{\partial}{\partial m} L_i(y,m,\Delta y)$
  • partial derivatives of the logarithmic prior: $\frac{\partial}{\partial \lambda_\mu} L_0(λ)$

\[\frac{\partial}{\partial \lambda_\mu} L_p(\lambda) = \frac{\partial}{\partial \lambda_\mu} L_0(\lambda) + \sum_{i=1}^N \frac{\partial}{\partial m} L_i(y_i, m(x_i,\lambda), \Delta y_i)\cdot \frac{\partial}{\partial \lambda_\mu} m(x_i,\lambda)\]

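For the normal log-density, ∂L/∂m = (y - m)/Δy², so the formula can be sketched as follows (illustrative names and data; uniform prior, so the prior term vanishes):

```julia
# Partial derivative of the log-posterior from the formula above, for a
# scalar-parameter linear model and the normal log-density.
∂mL(y, m, Δy) = (y - m) / Δy^2        # ∂/∂m of the log-normal density
m(x, λ) = λ * x
∂m(x, λ) = x                          # ∂m/∂λ
X = [1.0, 2.0]; Y = [2.0, 4.0]; ΔY = [1.0, 1.0]

∂Lp(λ) = sum(∂mL(Y[i], m(X[i], λ), ΔY[i]) * ∂m(X[i], λ) for i in eachindex(X))

root = ∂Lp(2.0)   # vanishes at the maximum-likelihood parameter
```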
FittingObjectiveFunctions.log_posterior_gradient (Function)
log_posterior_gradient(data::FittingData,
	model::ModelFunctions, 
	log_distribution_derivatives, 
	prior_partials::Union{Nothing,AbstractArray{Function,N}} = nothing
)

Return the gradient of the log-posterior distribution L_p(λ) as a function (gradient,λ) -> grad!(gradient,λ).

  • The gradient function grad! mutates gradient in place (for performance) and returns it. The initial elements of gradient do not matter, but its element type and length must match the parameter (array).

  • The partial derivatives $\frac{\partial}{\partial \lambda_\mu} m(x,\lambda)$ of the model function must be specified in the ModelFunctions object model.

  • log_distribution_derivatives can either be a function $\frac{\partial}{\partial m} L(y,m,\Delta y)$ (the same derivative for all distributions) or an array of functions $\left[\frac{\partial}{\partial m} L_1(y,m,\Delta y),\ldots,\frac{\partial}{\partial m} L_n(y,m,\Delta y) \right]$.

  • prior_partials can either be nothing (for the log-likelihood), or an array of functions $\left[\frac{\partial}{\partial \lambda_1} L_0(λ),\ldots,\frac{\partial}{\partial \lambda_n} L_0(λ) \right]$.

Analytical expression

  • independent data points $x_i$
  • dependent data points $y_i$
  • errors $\Delta y_i$
  • model function $m$
  • logarithmic $y$-uncertainty distributions: $L_i$
  • logarithmic prior distribution: $L_0$
  • partial derivatives of model function: $\frac{\partial}{\partial \lambda_\mu} m(x,\lambda)$
  • partial derivatives of the logarithmic $y$-uncertainty distributions: $\frac{\partial}{\partial m} L_i(y,m,\Delta y)$
  • partial derivatives of the logarithmic prior: $\frac{\partial}{\partial \lambda_\mu} L_0(λ)$

\[\nabla L_p(\lambda) = \sum_{\mu} \left( \frac{\partial}{\partial \lambda_\mu} L_0(\lambda) + \sum_{i=1}^N \frac{\partial}{\partial m} L_i(y_i, m(x_i,\lambda), \Delta y_i)\cdot \frac{\partial}{\partial \lambda_\mu} m(x_i,\lambda) \right) \vec{e}_\mu\]

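The corresponding in-place gradient can be sketched as follows. This is a hand-written stand-in, not the function returned by log_posterior_gradient; the model, partials, and data are illustrative, and a uniform prior is assumed so the prior term vanishes:

```julia
# In-place gradient of the log-posterior for a two-parameter linear model
# with the normal log-density (∂L/∂m = (y - m)/Δy²).
m(x, λ) = λ[1] * x + λ[2]
partials = [(x, λ) -> x, (x, λ) -> 1.0]  # ∂m/∂λ₁ and ∂m/∂λ₂
∂mL(y, m, Δy) = (y - m) / Δy^2
X = [1.0, 2.0]; Y = [2.0, 4.0]; ΔY = [1.0, 1.0]

function log_grad!(gradient, λ)
    for μ in eachindex(gradient)
        gradient[μ] = sum(∂mL(Y[i], m(X[i], λ), ΔY[i]) * partials[μ](X[i], λ)
                          for i in eachindex(X))
    end
    return gradient                       # mutated in place and returned
end

g = log_grad!(zeros(2), [2.0, 0.0])       # → [0.0, 0.0] at the maximum
```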