Mathematical Structure

Lie, Symplectic, and Poisson Groupoids and Their Lie Algebroids

C.-M. Marle, in Encyclopedia of Mathematical Physics, 2006

Introduction

Groupoids are mathematical structures able to describe symmetry properties more general than those described by groups. They were introduced (and named) by H. Brandt in 1926. Around 1950, Charles Ehresmann used groupoids with additional (topological and differentiable) structures as essential tools in topology and differential geometry. In recent years, Mikhail Karasev, Alan Weinstein, and Stanisław Zakrzewski independently discovered that symplectic groupoids can be used for the construction of noncommutative deformations of the algebra of smooth functions on a manifold, with potential applications to quantization. Poisson groupoids were introduced by Alan Weinstein as generalizations of both Poisson Lie groups and symplectic groupoids.

We present here the main definitions and first properties of groupoids, Lie groupoids, Lie algebroids, and symplectic and Poisson groupoids, together with their Lie algebroids.


URL:

https://www.sciencedirect.com/science/article/pii/B0125126662001450

The Relational Paradigm

Tom Johnston, in Bitemporal Data, 2014

Tables and Columns

A relational table is an ordered set of columns. Because these columns are sequentially ordered, we can identify each one, and distinguish it from all the other columns of that table.

In a database, we use column names to identify each column. But as mathematical structures, these columns have no names: they are the sets on which a Cartesian product is defined, and the Cartesian product is in turn the mathematical object on which relations are defined.

One way to define a set is to list its members, separated by commas, and then delimit that list. To indicate a sequentially ordered set, I will delimit the list with brackets ([…]), and otherwise with braces ({…}). Using these conventions, Figure 3.1 shows an ordered set of three sets.

Figure 3.1. An Ordered Set of Three Sets: a Minimalist View.

In Figure 3.1, the ordered set is SX. The brackets indicate that the sets on which SX is defined – S1, S2 and S3 – occur in a specific sequence. S1, S2 and S3 are represented as empty rectangles because as yet we know nothing about them, and nothing about their members.

SX is the mathematical structure on which, when interpreted, one or more database tables may be defined. I will consider only one such table, and will call it TX. The sets S1, S2 and S3 on which SX is defined are the mathematical structures on which, when interpreted, the columns of TX – call them TX.C1, TX.C2 and TX.C3 – will be defined.
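To make the distinction concrete, here is a minimal Python sketch of SX, TX, and their relationship; the member values and the two sample rows are illustrative assumptions, not taken from the chapter.

```python
from itertools import product

# The sets on which SX is defined; their members are made up for illustration.
S1 = {101, 102}
S2 = {"red", "blue"}
S3 = {True, False}

SX = [S1, S2, S3]              # brackets: a sequentially ordered set of sets
cartesian = set(product(*SX))  # the Cartesian product S1 x S2 x S3

# A relation (and hence a table such as TX) is a subset of that product.
# Column names like TX.C1 belong to the interpretation, not the mathematics.
TX = {(101, "red", True), (102, "blue", False)}
assert TX <= cartesian         # every row is a tuple from the product
```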


URL:

https://www.sciencedirect.com/science/article/pii/B9780124080676000036

Temporal Qualification in Artificial Intelligence

Han Reichgelt, Lluis Vila, in Foundations of Artificial Intelligence, 2005

The model of time

Modeling time as a mathematical structure requires deciding (i) the class or classes of basic objects that time is composed of, such as instants, intervals, etc. (i.e., the time ontology), and (ii) the properties of these time sets, such as dense vs. discrete, bounded vs. unbounded, partial vs. total order, etc. (i.e., the time topology).

This issue is discussed in the chapter Theories of Time and Temporal Incidence in this handbook, and we shall remain silent on what the best model of time is. When introducing a temporal qualification method, we shall merely assume that we are given a time structure

⟨𝒯1, …, 𝒯n, ℱtime, ℛtime⟩

where each 𝒯i is a non-empty set of time objects, ℱtime is a set of functions defined over them, and ℛtime is a set of relations over them. For instance, when formalizing our example we shall take a time structure with three sets: a set of time points that is isomorphic to the natural numbers (where the grain size is one day), the set of ordered pairs of natural numbers, and a set of temporal spans or durations that is isomorphic to the integers. ℱtime contains functions on these sets; ℛtime contains relations among them.

The decision about the model of time to adopt is independent of the temporal qualification method, although it has an impact on the formulas one can write and the formulas one can prove. The temporal qualification method one selects will determine how the adopted model of time is embedded in the temporal reasoning system. The completeness of a proof theory depends on the availability of a theory that captures the properties of the model of time and allows the proof system to infer all statements valid in the time structure. Such a theory, the theory of time, may have the form of a set of axioms written in the temporal language, which will include symbols denoting functions in ℱtime and relations in ℛtime. For example, the transitivity of the ordering relation (denoted by <1) over 𝒯1 can be captured by the axiom

∀t1, t2, t3 [ t1 <1 t2 ∧ t2 <1 t3 → t1 <1 t3 ]

However, depending on the time structure and the expressive power of the underlying logic, it may be impossible to write a complete set of axioms in our language.

An alternative way to capture the theory of time is through an appropriate set of inference rules, typically at least one for each temporal function and relation, which indicate how these expressions can be used in generating proofs. Of course, this choice requires much more effort than the previous one.
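As a minimal illustration of the inference-rule option, the sketch below (names and facts are assumed, not from the chapter) implements the transitivity axiom above as a forward-chaining rule that saturates a set of ordering facts:

```python
def transitive_closure(before):
    """Saturate a set of (t1, t2) facts, read t1 <1 t2, under transitivity."""
    facts = set(before)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(facts):
            for (c, d) in list(facts):
                # rule: from t1 <1 t2 and t2 <1 t3, infer t1 <1 t3
                if b == c and (a, d) not in facts:
                    facts.add((a, d))
                    changed = True
    return facts

print(transitive_closure({(1, 2), (2, 3)}))  # {(1, 2), (2, 3), (1, 3)}
```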


URL:

https://www.sciencedirect.com/science/article/pii/S1574652605800070

The Luenberger Observer: Correcting Sensor Problems

George Ellis, in Observers in Control Systems, 2002

What Is a Luenberger Observer?

An observer is a mathematical structure that combines sensor output and plant excitation signals with models of the plant and sensor. An observer provides feedback signals that are superior to the sensor output alone. The topic of this book is the Luenberger observer, which combines five elements:

a sensor output, Y(S),

a power converter output (plant excitation), Pc(S),

a model (estimation) of the plant, GPEst(S),

a model of the sensor, GSEst(S), and

a PI or PID observer compensator, Gco(S).

The general form of the Luenberger observer is shown in Figure 4-1.

Figure 4-1. General form of the Luenberger observer.

4.1.1 Observer Terminology

The following naming conventions will be used. Estimated will describe components of the system model. For example, the estimated plant is a model of the plant that is run by the observer. Observed will apply to signals derived from an observer; thus, the state (CO) and the sensor (YO) signals are observed in Figure 4-1. Observer models and their parameters will be referred to as estimated. Transfer functions will normally be named G(S) with identifying subscripts: GP(S) is the plant transfer function and GPEst(S) is the estimated or modeled plant.

4.1.2 Building the Luenberger Observer

This section describes the construction of a Luenberger observer from a traditional control system, adding components step by step. Start with the traditional control system shown in Figure 4-2. Ideally, the control loop would use the actual state, C(S), as feedback. However, access to the state comes through the sensor, which produces Y(S), the feedback variable. The sensor transfer function, Gs(S), often ignored in the presentation of control systems, is the focus here. Typical problems caused by sensors are phase lag, attenuation, and noise.

Figure 4-2. Traditional control system.

Phase lag and attenuation can be caused by the physical construction of the sensor or by sensor filters, which are often introduced to attenuate noise. The key detriment of phase lag is the reduction of loop stability. Noise can be generated by several forms of electromagnetic interference (EMI). Noise causes random behavior in the control system, corrupting the output and wasting power. All of these undesirable characteristics are represented by the term Gs(S) in Figure 4-2. The ideal sensor can be defined as Gs-IDEAL(S) = 1.

The first step in dealing with sensor problems is to select the best sensor for the application. Compared to using an observer, selecting a faster or more accurate sensor will provide benefits that are more predictable and more easily realized. However, limitations such as cost, size, and reliability will usually force the designer to accept sensors with undesirable characteristics, no matter how careful the selection process. The assumption from here forward will be that the sensor in use is appropriate for a given machine or process; the goal of the observer is to make the best use of that sensor. In other words, the first goal of the Luenberger observer will be to minimize the effects of Gs(S) ≠ 1.

For the purposes of this development, only the plant and sensor, as shown in Figure 4-3, need to be considered. Note that the traditional control system ignores the effect of Gs(S) ≠ 1; Y(S), the sensor output, is used in place of the actual state under control, C(S). But Y(S) is not C(S); the temperature of a component is not the temperature indicated by the sensor. Phase lag from sensors is often a primary contributor to loop instability; noise from sensors often demands correction by the addition of filters in the control loop, again contributing phase lag and ultimately reducing margins of stability.

Figure 4-3. Plant and sensor.

4.1.2.1 Two Ways to Avoid Gs(S) ≠ 1

So, how can the effects of Gs(S) ≠ 1 be removed? One alternative is to follow the sensed signal with the inverse of the sensor transfer function: GSEst⁻¹(S). This is shown in Figure 4-4. On paper, such a solution appears workable. Unfortunately, the nature of Gs(S) makes taking its inverse impractical. For example, if Gs(S) were a low-pass filter, as is common, its inverse would require a derivative, as shown in Equation 4.1. Derivatives are well known for being too noisy to be practical in most cases; high-frequency noise, such as that from quantization and EMI, processed by a derivative generates excessive high-frequency output noise.

Figure 4-4. An impractical way to estimate C(S): Adding the inverse sensor transfer function.

(4.1) if GSEst(s) = KEST/(s + KEST), then GSEst⁻¹(s) = 1 + s/KEST.
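A small numeric sketch of this effect (the sensor bandwidth, noise level, and test signal are assumed values, not from the book): the derivative term in GSEst⁻¹(s) amplifies the sensor noise by roughly two orders of magnitude here.

```python
import numpy as np

dt, kest = 1e-4, 100.0
t = np.arange(0.0, 0.2, dt)
c = np.sin(2 * np.pi * 10 * t)               # actual state C(t)

# Sensor: low-pass GS(s) = KEST/(s + KEST), Euler-integrated, plus noise.
y = np.zeros_like(c)
for k in range(1, t.size):
    y[k] = y[k - 1] + dt * kest * (c[k - 1] - y[k - 1])
y_noisy = y + 1e-3 * np.random.default_rng(1).standard_normal(t.size)

# Inverse filter GSEst^-1(s) = 1 + s/KEST: the derivative term is the problem.
c_hat = y_noisy + np.gradient(y_noisy, dt) / kest
print(np.abs(c_hat - c).max())   # error dominated by differentiated noise
```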

Another alternative to avoid the effects of Gs(S) ≠ 1 is to simulate a model of the plant in software as the control loop is being executed. The signal from the power converter output is applied to a plant model, GPEst(S), in parallel with the actual plant. This is shown in Figure 4-5. Such a solution is subject to drift because most control-system plants contain at least one integrator; even small differences between the physical plant and the model plant will cause the estimated state, CEst(S), to drift. As a result, this solution is also impractical.

Figure 4-5. Another impractical solution: Deriving the controlled state from a model of the plant.

The solution of Figure 4-4, which depends wholly on the sensor, works well at low frequency but produces excessive noise at high frequency. The solution of Figure 4-5, which depends wholly on the model and the power converter output signal, works well at high frequency but drifts in the lower frequencies. The Luenberger observer, as will be shown in the next section, can be viewed as combining the best parts of these two solutions.

4.1.2.2 Simulating the Plant and Sensor in Real Time

Continuing the construction of the Luenberger observer, augment the structure of Figure 4-5 to run a model of the plant and sensor in parallel with the physical plant and sensor. This configuration, shown in Figure 4-6, drives a signal representing the power conversion output through the plant model and through the sensor model to generate the observed sensor output, YO(S). Assume for the moment that the models are exact replicas of the physical components. In this case, YO(S) = Y(S) or, equivalently, EO(S) = 0. In such a case, the observed state, CO(S), is an accurate representation of the actual state. So CO(S) could be used to close the control loop; the phase lag of Gs(S) would have no effect on the system. This achieves the first goal of observers, the elimination of the effects of Gs(S) ≠ 1, but only for the unrealistic case where the model is a perfect representation of the actual plant.

Figure 4-6. Running models in parallel with the actual components.

4.1.2.3 Adding the Observer Compensator

In any realistic system, EO(S) will not be zero, because the models will not be perfect representations of their physical counterparts and because of disturbances. The final step in building the Luenberger observer is to route the error signal back to the model to drive the error toward zero. This is shown in Figure 4-7. The observer compensator, Gco(S), is usually a high-gain PI or PID control law.

Figure 4-7. The Luenberger observer.

The gains of Gco(S) are often set as high as possible so that even small errors drive the observer compensator to minimize the difference between Y(S) and YO(S). If this error is small, the observed state, CO(S), becomes a reasonable representation of the actual state, C(S); certainly, it can be much more accurate than the sensor output, Y(S).
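The loop just described can be captured in a few lines. The following discrete-time sketch mirrors the structure of Figure 4-7 with assumed first-order plant and sensor models; the gains and time constants are illustrative choices, not values from the book.

```python
dt = 1e-3
a_plant, a_model = 50.0, 48.0   # deliberate plant/model mismatch (GP vs. GPEst)
b_sensor = 200.0                # sensor GS modeled as a first-order low-pass
kp, ki = 300.0, 2e4             # Gco: high-gain PI observer compensator

c = y = c_obs = y_obs = integ = 0.0
for _ in range(20000):
    u = 1.0                                       # power converter output Pc
    c += dt * a_plant * (u - c)                   # physical plant state C
    y += dt * b_sensor * (c - y)                  # physical sensor output Y
    e = y - y_obs                                 # EO: sensor vs. observed sensor
    integ += ki * e * dt
    comp = kp * e + integ                         # PI compensator output
    c_obs += dt * (a_model * (u - c_obs) + comp)  # estimated plant GPEst
    y_obs += dt * b_sensor * (c_obs - y_obs)      # estimated sensor GSEst

print(f"C={c:.4f}  CO={c_obs:.4f}  Y={y:.4f}")    # CO tracks C despite mismatch
```

Despite the deliberate mismatch between a_plant and a_model, the integrator in the compensator drives EO toward zero, so CO settles on the actual state rather than drifting like the open-loop model of Figure 4-5.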

One application of the Luenberger observer is to use the observed state to close the control loop; this is shown in Figure 4-8, which compares to the traditional control system of Figure 4-2. The sensor output is no longer used to close the loop; its sole function is to drive the observer to form an observed state. Typically, most of the phase lag and attenuation of the sensor can be removed, at least in the frequency range of interest for the control loop.

Figure 4-8. A Luenberger observer-based control loop.


URL:

https://www.sciencedirect.com/science/article/pii/B9780122374722500055

Advances in Computers: Improving the Web

Dalibor Mitrović, ... Christian Breiteneder, in Advances in Computers, 2010

3.2 Building Blocks of Features

In this section, we analyze the mathematical structure of selected features and identify common components (building blocks). This approach offers a novel perspective on content-based audio features that reveals their structural similarities.

We decompose audio features into a sequence of basic mathematical operations, similarly to Mierswa and Morik [25]. We distinguish between three basic groups of functions: transformations, filters, and aggregations. Transformations are functions that map data (numeric values) from one domain into another domain. An example of a transformation is the discrete Fourier transform, which maps data from the temporal domain into the frequency domain and reveals the frequency distribution of the signal. It is important that the transformation from one domain into the other changes the interpretation of the data. The following domains are frequently used in audio feature extraction.

Temporal domain. The temporal domain represents the signal changes over time (the waveform). The abscissa of a temporal representation is the sampled time domain, and the ordinate corresponds to the amplitude of the sampled signal. While this domain is the basis for feature extraction algorithms, the signals are often transformed into more expressive domains that are better suited for audio analysis.

Frequency domain. The frequency domain reveals the spectral distribution of a signal and allows, for example, the analysis of harmonic structures, bandwidth, and tonality. For each frequency (or frequency band) the domain provides the corresponding magnitude and phase. Popular transformations from the time to the frequency domain are the Fourier (DFT), Cosine (DCT), and Wavelet transforms. Another widely used way to transform a signal from the temporal to the frequency domain is the application of banks of band-pass filters (for example, Mel- and Bark-scaled filters) to the time domain signal. Note that Fourier, Cosine, and Wavelet transforms may also be considered as filter banks.

Correlation domain. The correlation domain represents temporal relationships between signals. For audio features, the autocorrelation domain is of special interest. The autocorrelation domain represents the correlation of a signal with a time-shifted version of the same signal for different time lags. It reveals repeating patterns and their periodicities in a signal and may be employed, for example, for the estimation of the fundamental frequency of a signal.

Cepstral domain. The concept of the cepstrum was introduced by Bogert et al. [26]. A representation in the cepstral domain is obtained by taking the Fourier transform of the logarithm of the magnitude of the spectrum. The second Fourier transform may be replaced by the inverse DFT, the DCT, or the inverse DCT. The Cosine transform decorrelates the data better than the Fourier transform and is thus often preferred. A cepstral representation is one way to compute an approximation of the shape (envelope) of the spectrum. Hence, cepstral features usually capture timbral information [13]. They are frequently applied in automatic speech recognition and audio fingerprinting.

Modulation frequency domain. The modulation frequency domain reveals information about the temporal modulations contained in a signal. A typical representation is the joint acoustic and modulation frequency graph which represents the temporal structure of a signal in terms of low-frequency amplitude modulations [24]. The abscissa represents modulation frequencies and the ordinate corresponds to acoustic frequencies. Another representation is the modulation spectrogram introduced by Greenberg and Kingsbury [27] which displays the distribution of slow modulations across time and frequency. Modulation information may be employed for the analysis of rhythmic structures in music [28] and noise-robust speech recognition [27, 29].

Reconstructed phase space. Audio signals such as speech and singing may show nonlinear (chaotic) phenomena that are hardly represented by the domains mentioned so far. The nonlinear dynamics of a system may be reconstructed by embedding the signal into a phase space. The reconstructed phase space is a high-dimensional space (usually d > 3), where every point corresponds to a specific state of the system. The reconstructed phase space reveals the attractor of the system under the condition that the embedding dimension d has been chosen adequately. Features derived from the reconstructed phase space may estimate the degree of chaos in a dynamic system and are often applied in automatic speech recognition for the description of phonemes [30, 31].

Eigendomain. We consider a representation to be in eigendomain if it is spanned by eigen- or singular vectors. There are different transformations and decompositions that generate eigendomains in this sense, such as principal component analysis (PCA) and singular value decomposition (SVD). These (statistical) methods have in common that they decompose a mixture of variables into some canonical form, for example, uncorrelated principal components in the case of PCA. Features in eigendomain have decorrelated or even statistically independent feature components. These representations enable easy and efficient reduction of data (e.g., by removing principal components with low eigenvalues).

In addition to transformations, we define filters as the second group of operators. In the context of this chapter, we define a filter as a mapping of a set of numeric values into another set of numeric values residing in the same domain. In general, a filter changes the values of a given numeric series but not their number. Note that this definition of the term filter is broader than the definition usually employed in signal processing.

Simple filters are, for example, scaling, normalization, magnitude, square, exponential function, logarithm, and derivative of a set of numeric values. Other filters are quantization and thresholding. These operations have in common that they reduce the range of possible values of the original series.

We further consider the process of windowing (framing) as a filter. Windowing is simply the multiplication of a series of values with a weighting (window) function, where all values inside the window are weighted according to the function and the values outside the window are set to zero. Windowing may be applied for (non)uniform scaling and for the extraction of frames from a signal (e.g., by repeated application of Hamming windows).

Similarly, there are low-pass, high-pass, and band-pass filters. Filters in the domain of audio feature extraction are often based on the Bark [32], ERB [33], and Mel [34] scales. We consider the application of a filter (or a bank of filters) to be a filter according to our definition if the output of each filter is again a series of values (the subband signal). Note that a filter bank may also represent a transformation. In this case, the power of each subband is aggregated over time, which results in a spectrum of the signal. Consequently, a filter bank may be considered as both a filter and a transformation, depending on its output.

The third category of operations is aggregations. An aggregation is a mapping of a series of values into a single scalar. The purpose of aggregations is the reduction of data, for example, the summarization of information from multiple subbands. Typical aggregations are mean, variance, median, sum, minimum, and maximum. A more comprehensive aggregation is a histogram; in this case, each bin of the histogram corresponds to one aggregation. Similarly, binning of frequencies (e.g., spectral binning into Bark and Mel bands) is an aggregation.

A subgroup of aggregations are detectors. A detector reduces data by locating distinct points of interest in a value series, for example, peaks, zero crossings, and roots.

We assign each mathematical operation that occurs during feature extraction to one of the three proposed categories (see Section 5.1). These operations form the building blocks of features. By referring to these building blocks, we can describe the process of computing a feature in a very compact way. As we will see, the number of different transformations, filters, and aggregations employed in audio feature extraction is relatively low, since most audio features share similar operations.
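To illustrate, the following sketch (signal and parameters are assumed, not from the chapter) composes a cepstral feature entirely from the building blocks defined above: windowing (filter), DFT (transformation), logarithm (filter), DCT (transformation into the cepstral domain), and a mean over frames (aggregation).

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
signal = rng.standard_normal(16384)            # stand-in for an audio signal
frame_len, hop = 1024, 512
window = np.hamming(frame_len)                 # filter: windowing

frames = [signal[i:i + frame_len] * window
          for i in range(0, len(signal) - frame_len, hop)]

coeffs = []
for frame in frames:
    spectrum = np.abs(np.fft.rfft(frame))      # transformation: DFT
    log_spec = np.log(spectrum + 1e-12)        # filter: logarithm
    cepstrum = dct(log_spec, norm="ortho")     # transformation: cepstral domain
    coeffs.append(cepstrum[:13])               # low-order cepstral coefficients

feature = np.mean(coeffs, axis=0)              # aggregation: mean over frames
```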


URL:

https://www.sciencedirect.com/science/article/pii/S0065245810780037

21st European Symposium on Computer Aided Process Engineering

Heinz A. Preisig, in Computer Aided Chemical Engineering, 2011

3.1 Implementation

The ontology described here has been mapped into a complete mathematical structure consisting of lists and equations. For the definition of the ontology, a special editor has been generated which, using a wizard-type approach, defines the various parts step by step, building on the graph representation. An important feature of the definition is that some of the information is defined as part of the model definition. These objects are marked accordingly; when the ontology is used, this information must be provided either from a model file containing it or through direct interaction with the person indirectly using the ontology. For lack of space, publication of the mathematical structure is deferred.


URL:

https://www.sciencedirect.com/science/article/pii/B9780444537119500213

Quantitative Systems Pharmacology

Fabrizio Bezzo, Federico Galvanin, in Computer Aided Chemical Engineering, 2018

Abstract

Physiological models are mathematical models characterized by a physiologically consistent mathematical structure (defined by the set of equations being used) and a set of model parameters to be estimated in the most precise and accurate way. However, systems in physiology and medicine are typically characterized by poor observability (i.e., limited possibility for the clinician to practically observe and quantify the relevant phenomena occurring in the body through clinical tests and investigations), a high number of interacting and unmeasured variables (as an effect of the complexity of interactions), and poor controllability (i.e., limited capacity to drive the state of the system by acting on decision variables). All these factors may severely hinder the practical identifiability of these models, i.e., the possibility of estimating the set of parameters in a statistically satisfactory way from clinical data. Identifiability is a structural property of a model, but it is also determined by the amount of useful information that can be generated by clinical data. Hence the importance of designing clinical protocols that allow the model parameters to be estimated in the quickest and most reliable way. In this chapter, we discuss how the identifiability of a physiological model can be characterized and analyzed, and how identifiability tests and model-based design of experiments (MBDoE) techniques can be exploited to tackle the identifiability issues arising from clinical tests.

A case study related to the identification of physiological models of von Willebrand disease from clinical data is presented, in which techniques and methods for testing the identifiability of PK models have been used.


URL:

https://www.sciencedirect.com/science/article/pii/B9780444639646000040

Intelligent Control

A. Meystel, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

I.A Conventional Control

The first step in conventional control theory is to define a distinguishing characteristic of a particular system. The next step is to formulate that system mathematically using differential and integral calculi, differential equations, linear algebra, and vector analysis.

Definition

A system Σ is a mathematical structure (T, X, U, Ω, Y, Γ, ϕ, η) defined by the following axioms.

Axiom 1.

Existence. There exists a given time set T, a state set X, a set of input values U, a nonempty set of acceptable input functions (or command sequence, or control)

Ω = {ω : T → U}

a set of output values Y, and a set of output functions

Γ = {γ : T → Y}

Axiom 2.

Direction of time. T is an ordered subset of the reals.

Axiom 3.

Organization. There exists a state-transition function (or trajectory of motion, or solution curve)

ϕ : T × T × X × Ω → X

whose value is the state x(t) = ϕ(t; τ, x, ω) ∈ X resulting at time t ∈ T from the initial state (or event) x = x(τ) at the initial time τ ∈ T under the action of the input ω ∈ Ω. This function has the following properties:
(a)

Direction of time. Function ϕ is defined for all t ≥ τ, but not necessarily for all t < τ.

(b)

Consistency. Function ϕ(τ; τ, x, ω) = x for all τ ∈ T, all x ∈ X, and all ω ∈ Ω.

(c)

Nested concatenation, or composition. For any t1 < t2 < t3, the following holds

ϕ(t3; t1, x, ω) = ϕ(t3; t2, ϕ(t2; t1, x, ω), ω)

for all x ∈ X and for all ω ∈ Ω.
(d)

Causality. If ω1, ω2 ∈ Ω and ω1(s) = ω2(s) for all s ∈ (τ, t], then

ϕ(t; τ, x, ω1) = ϕ(t; τ, x, ω2).

Axiom 4.

Transfer function mapping. There exists a map η : T × X → Y, which defines the output y(t) = η(t, x(t)).

Several generations of control scientists confined themselves to using this rigid model of reality. One can see immediately that a very strict structure is required from the beginning, such that the sets of inputs, outputs, and states are clearly separated and given in advance. No negotiation is presumed as a part of the future control operation, and no overlapping among the sets is presumed. The system contains no goal of operation as a part of the structure. However, this might not be required at this stage: the structure does include the set of controls, but we have not yet discussed how this set will be applied.

In reality, we are interested in defining the state-transition function not only after but also before the definite moment in time (t ≤ τ). Otherwise, problems of prediction are difficult to solve, and these are among the most important problems in the domain of contemporary real systems to be controlled.

Axioms 1–4 imply that the time scale of the system should be preselected. Another way of dealing with the system model can be introduced that does not require the time scale of the system to be displayed. Interestingly enough, there is a tacit presumption that this concatenation [see property (c)] should be unique, which is an extremely rigid requirement. It actually excludes from consideration all redundant systems where a multiplicity of concatenations can be found.

When Axioms 1–3 are applied to a realistic physical system, the stepwise numerical representations of any particular measurement system become apparent. This is due to the fact that, for any given scale, a limit of distinguishability exists such that the interval of each coordinate is given as a limit of accuracy of measurements and computations. Every system with a state-transition function as previously defined and with the norm ∥ω∥ = sup ∥u(t)∥ has a transition function of the following form:

dx/dt = f(t, x, πtω),

where the operator πt is a mapping Ω → U derived from ω ↦ u(t) = ω(t). Rejecting πt: ω ↦ (u(t), u′(t), …, u(n)(t)) further narrows down the domain of systems under consideration. In the smooth, linear, finite-dimensional case, the transition function obeys the simplified relations below. The simplification is determined by selecting norms of the corresponding spaces with no derivatives of the time functions for controls. Only now do we realize that this expectation might be quite right; we are now dealing with systems that need a norm based on the whole set u(t), u′(t), …, u(n)(t), as follows:

dx/dt = F(t)x + G(t)u(t),  y = H(t)x(t),

where F(t) and G(t) are parts of the expression f(t, x, u(t)) = F(t)x + G(t)u(t), and H(t) is a mapping H: T → {p × n matrices}, which is obtained from the following:

y(t) = η(t, x(t)) = H(t)x(t),

where T = R1 and X and U are normed spaces, F(t) is a mapping F: T → {n × n matrices}, G(t) is a mapping G: T → {n × m matrices}, n is the dimensionality of the states x ∈ Rn, m is the dimensionality of the controls u ∈ Rm, and p is the dimensionality of the outputs y ∈ Rp. Proper adjustments and modifications are made for a variety of cases: discrete systems, systems with nonvarying parameters, and so on.
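As a minimal sketch of this linear, finite-dimensional case (the two-state system and the constant input are assumed examples, not from the article), the state-transition function ϕ can be realized by numerical integration, and the nested-concatenation axiom can be checked directly:

```python
import numpy as np

F = np.array([[0.0, 1.0], [-2.0, -3.0]])   # n x n (constant in t here)
G = np.array([[0.0], [1.0]])               # n x m
H = np.array([[1.0, 0.0]])                 # p x n

def phi(t, tau, x, omega, dt=1e-3):
    """State at time t from state x at time tau under input omega: T -> U."""
    s, xs = tau, np.asarray(x, dtype=float)
    while s < t:                           # Euler integration of dx/dt = Fx + Gu
        xs = xs + dt * (F @ xs + G @ np.atleast_1d(omega(s)))
        s += dt
    return xs

omega = lambda s: 1.0                      # a constant admissible input
x0 = np.array([0.0, 0.0])

# Nested concatenation: phi(t3; t1, x, w) = phi(t3; t2, phi(t2; t1, x, w), w)
x_direct = phi(2.0, 0.0, x0, omega)
x_nested = phi(2.0, 1.0, phi(1.0, 0.0, x0, omega), omega)
print(H @ x_direct, H @ x_nested)          # outputs agree up to discretization error
```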

The control law in conventional control is defined as a mapping k: T × X → U, which puts x(t) and u(t) in correspondence for each moment of time. In this formulation, we do not define how the control law corresponds to any of the control requirements. The standard problem of control is defined as follows: for each event from the set of initial events (t0, x0), a control u(·) should be determined that transforms this initial event into the goal set while minimizing the cost functional.

Another definition of a control problem has a clear reference to conventional control theory. The control problem is recommended to be divided into the following steps: (1) establish a set of performance requirements; (2) write down the performance specifications; (3) formulate a model of the system in the form of a set of differential equations; (4) using conventional control theory, find the performance of the original system, and if it does not satisfy the list of requirements, add cascade or feedback compensation to improve the response; and (5) using the modern control theory approach, assign the entire eigenstructure, or design the necessary structure to minimize the specified performance index (which is understood as a quadratic performance index). In general, one can easily find that there is a surprising lack of uniformity in the existing views on control problems.

The following definition of control law can be considered consistent with the practice of control: Control of a process implies driving the process to effectively attain a prespecified goal. One can see that the notion of goal is included in this definition.


URL:

https://www.sciencedirect.com/science/article/pii/B0122274105003483

Advances in Imaging and Electron Physics

Zofia Barańczuk, ... Peter Zolliker, in Advances in Imaging and Electron Physics, 2010

2.3 RGB and XYZ Color Spaces

Grassmann (1853) observed that empirical color matchings satisfy a mathematical structure he had introduced himself some years earlier (Grassmann, 1844). This structure is today known as a vector space. In the case of color, we observe a three-dimensional (3D) vector space; accordingly, colors can be specified as tristimulus values. There is a strong correlation between the mathematical structure and the underlying physics of light stimuli. A vector can be understood as a light source, its length as intensity, and the addition of vectors as the physical mixture of the corresponding light sources. For fixed viewing conditions, this approach was carefully tested and documented by Wright (1928-29) and Guild (1931), respectively. This resulted in the introduction of the standardized color spaces CIERGB and CIEXYZ (see CIE, 1932; for short, RGB and XYZ).

The primary colors (red, green, and blue) are defined as colors of monochromatic light at wavelengths of 435.8 nm (B), 546.1 nm (G), and 700 nm (R). The tristimulus values of monochromatic light at a given wavelength λ have been determined and documented as the color-matching functions r̄(λ), ḡ(λ), and b̄(λ). These functions allow the calculation of the tristimulus values R, G, and B of an arbitrary light stimulus with a given spectral power distribution Φ(λ) as

(1.1) R = k ∫ Φ(λ) r̄(λ) dλ,  G = k ∫ Φ(λ) ḡ(λ) dλ,  B = k ∫ Φ(λ) b̄(λ) dλ,

where k is a normalizing constant.

More popular than RGB is XYZ, which is mathematically derived from RGB by a change of the vector space base (i.e., by a base transformation matrix):

(1.2)  ⎛ x̄(λ) ⎞            ⎛ 0.490  0.310  0.200 ⎞ ⎛ r̄(λ) ⎞
       ⎜ ȳ(λ) ⎟  = 5.6508  ⎜ 0.177  0.812  0.011 ⎟ ⎜ ḡ(λ) ⎟
       ⎝ z̄(λ) ⎠            ⎝ 0.000  0.010  0.990 ⎠ ⎝ b̄(λ) ⎠

The choice of the XYZ base seems arbitrary but satisfies some technical constraints; for instance, the Y coordinate is identical to the CIE spectral luminance efficiency function, also known as the photopic luminance efficiency function V(λ).
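A small numeric sketch of Eqs. (1.1) and (1.2): the integrals become Riemann sums over sampled functions, and the base transformation then maps the resulting RGB tristimulus vector to XYZ. The spectrum and the bell-shaped stand-ins for the color-matching functions are assumptions for illustration; the real r̄, ḡ, b̄ tables are published by the CIE.

```python
import numpy as np

M = 5.6508 * np.array([[0.490, 0.310, 0.200],
                       [0.177, 0.812, 0.011],
                       [0.000, 0.010, 0.990]])

lam = np.linspace(380.0, 780.0, 81)               # wavelength grid in nm
dlam = lam[1] - lam[0]
phi = np.exp(-0.5 * ((lam - 560.0) / 60.0) ** 2)  # assumed stimulus Phi(lambda)

# Illustrative stand-ins for the color-matching functions.
rbar = np.exp(-0.5 * ((lam - 600.0) / 40.0) ** 2)
gbar = np.exp(-0.5 * ((lam - 550.0) / 40.0) ** 2)
bbar = np.exp(-0.5 * ((lam - 450.0) / 40.0) ** 2)

k = 1.0                                           # normalizing constant
rgb = k * dlam * np.array([(phi * f).sum() for f in (rbar, gbar, bbar)])
xyz = M @ rgb        # Eq. (1.2) applied to tristimulus values, by linearity
```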

Many other well-known RGB color spaces, such as sRGB, Adobe 98-RGB, and ECI-RGB, are also derived from RGB and should be understood as mathematically equivalent. In contrast to these color spaces, CMYK typically denotes a device-dependent specification describing the amount of ink (cyan, magenta, yellow, and black) placed in a raster cell. Because of the subtractive interaction of C, M, Y, and K inside the raster cell, modeling the resulting color is much more complex.

The CIEXYZ system represents the average ability of humans to discriminate colors under specific viewing conditions; hence it is sometimes also called the standard or normal observer.


URL:

https://www.sciencedirect.com/science/article/pii/S1076567010600018

Advances in Imaging and Electron Physics

Johan Debayle, Jean-Charles Pinoli, in Advances in Imaging and Electron Physics, 2011

2.3 Importance of the Ordered Sets Theory

Nevertheless, a vector space representing a GLIP framework is too poor a mathematical structure. Indeed, it affords only a description of how images are combined and amplified. In addition to abstract algebra, it is then also necessary to resort to other mathematical fields, such as topology and functional analysis (Pinoli, 1997b).

In particular, the ordered sets theory (Kantorovitch & Akilov, 1981; Luxemburg & Zaanen, 1971) offers powerful and useful notions for image processing. Indeed, from an image-processing viewpoint, since images consist of positively valued signals, the positivity notion is of fundamental importance. An ordered vector space S is a vector space structured by its vectorial operations ⊕ (addition), ⊖ (subtraction), and ⊗ (scalar multiplication) and by an order relation, denoted ≽, which obeys the reflexive, antisymmetric, and transitive laws (Kantorovitch & Akilov, 1981; Luxemburg & Zaanen, 1971).

Any vector s of S can then be expressed as

(2.1) s = s⁺ ⊖ s⁻

where s⁺ and s⁻ are called the positive part and negative part of s, respectively.

The positive and negative parts of s are respectively defined as

(2.2) s⁺ = max(s, 0)

(2.3) s⁻ = max(⊖s, 0)

where max(·, ·) denotes the maximum in the sense of the order relation ≽, and 0 is the zero vector (i.e., the neutral element for the vector addition ⊕). From this point, the modulus of a vector s, denoted |s|, is defined for all s ∈ (S, ⊕, ⊗, ≽) as

(2.4) |s| = s⁺ ⊕ s⁻

Note that the positive part, negative part, and modulus of a vector s belonging to an ordered vector space S are positive elements, namely:

(2.5) s⁺ ≽ 0

(2.6) s⁻ ≽ 0

(2.7) |s| ≽ 0
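A minimal numpy sketch of Eqs. (2.1)-(2.7), with ordinary componentwise addition, subtraction, and order standing in for ⊕, ⊖, and ≽ (the GLIP operations themselves are framework-specific):

```python
import numpy as np

def positive_part(s):
    return np.maximum(s, 0.0)          # Eq. (2.2): s+ = max(s, 0)

def negative_part(s):
    return np.maximum(-s, 0.0)         # Eq. (2.3): s- = max(-s, 0)

def modulus(s):
    return positive_part(s) + negative_part(s)   # Eq. (2.4)

s = np.array([3.0, -1.5, 0.0])                   # a stand-in image/vector
assert np.allclose(s, positive_part(s) - negative_part(s))  # Eq. (2.1)
assert (positive_part(s) >= 0).all()                        # Eq. (2.5)
assert (negative_part(s) >= 0).all()                        # Eq. (2.6)
assert (modulus(s) >= 0).all()                              # Eq. (2.7)
```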

The ordered sets theory has played a fundamental role within some GLIP approaches and has allowed powerful, mathematically justified image-processing techniques to be developed (Pinoli, 1987, 1992, 1997a).

From this standpoint, a GLIP framework can be represented by an ordered vector space structure.


URL:

https://www.sciencedirect.com/science/article/pii/B978012385985300002X