Discriminant Structures Associated to Matrix Semantics

In this paper we present a method to characterize logical matrices by means of a special kind of structure, called here a discriminant structure. Its definition is based on the discrimination of each truth-value of a given (finite) matrix M = (A, D) according to whether it belongs to D. From this starting point, we define a whole class S_M of discriminant structures. This class is characterized by a set of Boolean equations, as shown here. In addition, several technical results are presented, and the relation of Discriminant Structure Semantics (D.S.S.) to other related semantics, such as Dyadic and Twist-Structure Semantics, is emphasized.


Preliminaries
An interesting point of view against semantics based on many truth-values was established by R. Suszko in [13]. In his opinion, it would be more faithful, from a semantic perspective, to consider just two truth-values and to obtain some new kind of interpretation from the involved formulas to these "new" values (by the way, in this case the suggested interpretations do not need to be homomorphisms). Thus, the truth-values of a given matrix are merely reference values (which should not be confused with truth values).
Following this idea, several researchers have attempted to "pass from" many-valued matrices to other structures that have as basis just two truth-values. A frequent underlying idea guiding some studies on these topics is the following: the truth-values of a matrix M = (A, D) can often be separated (or discriminated) according to their belonging to D, the set of distinguished truth-values of M. Moreover, this motivation has already been applied in the literature (see [12], [1] or [2], for instance).
Based on the previous motivations, in this work we present a kind of representation of many-valued matrices, in such a way that every involved truth-value can be "codified" by tuples of 0s and 1s. So, for every matrix M = (A, D) admitting this codification, the support of its underlying algebra A can be understood as a subset of 2^t, for a suitable t. In addition, this characterization of M induces a whole class S_M of structures (which will be called "discriminant structures" here). Every member of S_M can be considered a C-matrix, indeed. But the key point of this work is that every discriminant structure in S_M is defined by means of certain Boolean equations, which are determined by the original representation of M. Moreover, the class S_M defines in a natural way the consequence relation |=_{S_M} (which, informally, will frequently be called the "Discriminant Structure Semantics", or D.S.S., associated to M). Furthermore, for every formula α, |=_{S_M} α iff |=_M α, as we shall demonstrate later.
Summarizing, the representation of a given matrix M = (A, D) shown here induces a new kind of semantics that can be considered a sort of "middle point" between matrix semantics and algebraic semantics (in the sense that |=_{S_M} is given by a class of matrices characterized by equations) and that, in addition, is weakly adequate w.r.t. |=_M. Throughout this paper we will present these results, together with others that motivate and provide a reasonable initial analysis of the scope of D.S.S. With this in mind, the organization of this paper is as follows: in the sequel we fix the definitions and notation used throughout this article. Section 2 explains, in informal terms for the moment, the way D.S.S. behaves, by means of some clarifying examples. All this discussion will be extended (in a more technical mode) in Section 3. Indeed, it is in this section that D.S.S. will be presented formally, together with the proofs of the main technical results. In Section 4 the following interesting fact will be established: not every finite matrix admits a Discriminant Structure Semantics. This will suggest some lines of research that we shall discuss later. Finally, in the last section we will compare our proposed semantics to others related, in some sense, to the approach of D.S.S., such as Dyadic Semantics (already indicated) and Twist-Structure Semantics. We will conclude the paper with some comments about possible future work.
With respect to the basic notation and definitions used in this paper, take into account that we are studying different ways of defining the same consequence relation. So, we choose to use the traditional formalism of Abstract Logic, applied mainly to the particular case of consequence relations defined by matrices. For that, we rely mainly on the point of view developed in [3], with some small notational changes when necessary. Definition 1.1. We denote by ω = {0, 1, 2, ...} the set of natural numbers; a signature is a set C = {c_i}_{i∈I} together with a function ρ : C → ω. Here, every element c ∈ C will be called a connective of C, where ρ(c) is the arity of c. Given a signature C, the sentential language determined by C is the absolutely free algebra generated by C over a countable fixed set V (this set will be called the set of atomic formulas of L(C)).
Every language L(C) can be understood as a particular, paradigmatic case of a C-algebra, whose formal definition is as follows: Definition 1.2. Given a signature C, a C-algebra is a pair A = (A, C_A), such that every connective c of arity ρ(c) has an associated operation c_A ∈ C_A, with the same arity as c. The set A will be called the support of A. Note that a C-algebra is, actually, any algebra similar to L(C) := (L(C), C) (in this last case C_{L(C)} is identified with C, indeed).
Throughout this paper we will work with several signatures. However, a special signature (the Boolean one) deserves particular attention: Definition 1.3. The Boolean signature is the set C = {∨, ∧, −, 1, 0}, with obvious arities. A C-algebra B = (B, C_B) will be called a Boolean algebra if and only if it is, in addition, a bounded, distributive, complemented lattice. In this context, the "secondary operation" → can be defined as usual: a_1 → a_2 := −a_1 ∨ a_2. As a particular case of a Boolean algebra, the canonical two-element algebra 2 = (2, C_2), with 2 = {0, 1}, will be used extensively in this paper. By the way, every function f : 2^r → 2, with r ∈ ω, will be called simply a 2-Boolean function.
Recall here this reformulation of the Conjunctive Normal Form Theorem, which will be used later: Proposition 1.4. Every 2-Boolean function f : 2^n → 2 can be identified with a function f̂ : 2^n → 2 (the conjunctive normal form of f, indeed) in such a way that f̂ is defined using only the functions of C_2.
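As a sketch of how Proposition 1.4 can be used computationally, the clauses of a conjunctive normal form may be read off the truth table: for each input where f is 0, add a clause falsified exactly there. The following Python illustration is ours, not from the paper:

```python
from itertools import product

def cnf_clauses(f, n):
    # one clause per falsifying assignment of f; each clause is a tuple
    # of literals (i, pol): variable index i, pol=1 positive, pol=0 negated
    clauses = []
    for bits in product((0, 1), repeat=n):
        if f(*bits) == 0:
            clauses.append(tuple((i, 1 - b) for i, b in enumerate(bits)))
    return clauses

def eval_cnf(clauses, bits):
    # a conjunction of disjunctions of (possibly complemented) variables,
    # i.e. an expression built only from the functions of C_2
    return int(all(any(bits[i] if pol else 1 - bits[i] for i, pol in cl)
                   for cl in clauses))

xor = lambda x, y: x ^ y
clauses = cnf_clauses(xor, 2)
assert all(eval_cnf(clauses, b) == xor(*b) for b in product((0, 1), repeat=2))
```

The identification asserted by Proposition 1.4 is exactly the final check: the CNF expression agrees with f on every input.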
At this point, note that any signature C plays (at least) two roles: on one hand, it determines sentential languages; on the other hand, it defines C-algebras. Of course, these notions are strongly related. For instance, C-algebras are the basis of sentential logics defined by means of logical matrices. We recall both definitions in the sequel. Every C-matrix defines a consequence relation for L(C), as usual: we say that ϕ is a tautology (relative to M) iff ∅ |=_M ϕ (denoted |=_M ϕ). The logic induced by M is the pair L = (C, |=_M). If the domain of a C-matrix M is finite, we will say that M is an n-valued matrix and, by extension, that L = (C, |=_M) is an n-valued logic. This definition can be generalized to classes: if K is a class of C-matrices, the consequence relation |=_K is given by: Γ |=_K ϕ iff Γ |=_M ϕ for every M in K.
Turning back to C-matrices, we will also use the following notions: if, in addition, h is surjective, we will say that it is a matrix epimorphism. On the other hand, h : A_1 → A_2 is a matrix isomorphism (between M_1 and M_2) iff it is an isomorphism (in the algebraic sense) verifying additionally that h(D_1) = D_2 and h(A_1 − D_1) = A_2 − D_2.

We conclude this section with some comments about notation: the metavariables for formulas will be denoted by Greek lowercase letters (with subscripts, if necessary). In particular, we will use the letters α, α_0, α_1, ... only to denote atomic formulas. The expression β = β(α_1, ..., α_n) means that the atomic formulas of β belong to the set {α_1, ..., α_n}. If β = β(α_1, ..., α_n), the expression β(α_1/γ_1, ..., α_n/γ_n) denotes the uniform substitution, in β, of the atomic formulas α_i by the formulas γ_i (with 1 ≤ i ≤ n). If there is no risk of confusion, this expression will be abbreviated to β(γ_1, ..., γ_n). With respect to notation related to algebras: the elements of any support A will be denoted by the letters a_0, a_1, a_2, ... (usually as references to "specific elements of A"), or by the letters x_0, ..., x_n, ..., y_0, y_1, ... (as an informal notation for variables ranging over A). In addition, the symbols x and a denote tuples (for instance, a := (a_1, ..., a_t) ∈ A^t). In this context, the symbol π_i denotes the i-th projection of any tuple. Besides that, the symbol ≅ will denote isomorphism (between algebras or between matrices, depending on the context). Any other definition or notation (or even notational abuse or convention) used in this paper will be indicated when needed.

Discriminant Structure Semantics: some motivating examples
We begin our presentation of Discriminant Structure Semantics with several examples that will help us understand some motivations behind the formal definition of D.S.S. By the way, some of these examples will also be used for other purposes later. In all these examples the signature will be the same: C′ = {⊃, ¬}, with obvious arities.
First of all, we give the definition of a Discriminant Structure Semantics for the logic I^1P^0 (better known simply as the "weakly-intuitionistic logic I^1"), defined in [11].
For a better understanding of M_[1,0]: the truth-values T_0 and F_0 are classical truth and falsehood, respectively; on the other hand, F_1 is an "intermediate value of falsehood".
We will characterize the tautologies of I^1P^0 using discriminant structures, whose definition will be given in the sequel. This characterization (with some slight changes) was shown in [8]. For our purposes we rely on Boolean algebras of the form B = (B, C_B) (recall Definition 1.3): Definition 2.2. The discriminant structure (of type [1,0]) associated to a given Boolean algebra B (the d.s. of type [1,0] associated to B) is defined, in part, by: (2) (x_0, x_1) ⊃ (y_0, y_1) := (x_0 →_B y_0, −_B(x_0 →_B y_0)).
In addition, the set of designated values of R_[1,0](B) is fixed as part of the definition. Of course, ∧_B, −_B and →_B are the operations, with their obvious behavior, defined in the context of every Boolean algebra B.
The class of all the discriminant structures of type [1,0] will be denoted by S_[1,0]. This class will be called the Discriminant Structure Semantics for I^1P^0 (the D.S.S. for I^1P^0, for short). Considering this definition, we define |=_{S_[1,0]} ⊆ ℘(L(C′)) × L(C′) as the matrix consequence relation indicated in Definition 1.7 (for classes of matrices). So, Γ |=_{S_[1,0]} α iff Γ |=_R α for every d.s. R in S_[1,0].
Regarding the previous definition, it can easily be proved that: Proposition 2.3. The operations ⊃ and ¬ are well defined. That is, every set A_[1,0](B) is closed under applications of ⊃ and ¬.
(b) For every prime filter ∇ of B, the binary relation ≡_∇ defined by: x ≡_∇ y iff {x →_B y, y →_B x} ⊆ ∇ is a congruence, and the quotient B/∇ (whose support is B/∇ = {∇, ∆}) is isomorphic to the Boolean algebra 2, where ∆ := B − ∇ is the prime ideal associated to ∇.
From Propositions 2.5(a) and 2.6(b) we have: Corollary 2.7. For every Boolean algebra B and every prime filter ∇ of B, ..., where the operations in the algebra A_[1,0](B/∇) are given as in Definition 2.4 (replacing 1 by ∇ and 0 by ∆).
Proposition 2.8 (Trichotomy). Let B be any Boolean algebra and let ∇ be any prime filter of B. Then, for every pair (x_0, x_1) ∈ A_[1,0](B), one and only one of the following conditions holds: Proof. It follows from basic properties of prime filters and Definition 2.2.
Proposition 2.9. Every prime filter ∇ of a Boolean algebra B induces a matrix epimorphism E_∇ (cf. Proposition 2.8). This also implies that E_∇ preserves the designated values. Finally, it is not difficult to prove that E_∇ is a homomorphism. So, from (A) and (B), the proof is complete.

Revista Colombiana de Matemáticas
Proof. On one hand, since ... Conversely, suppose that there exist a Boolean algebra B and a d.s. ... So, there is a prime filter ∇ ⊆ B with w_0(ϕ) ∉ ∇, by Proposition 2.6(a). From this and Proposition 2.9, there exists an epimorphism ... This concludes the proof.
A generalization of I^1P^0, and simultaneously of the paraconsistent logic I^0P^1 (traditionally known as P^1, see [10]), is the logic I^2P^1. Definition 2.11. The logic I^2P^1 is defined by the five-valued matrix M_[2,1]. In addition, the truth-functions ¬ and ⊃ here behave as follows:

A Discriminant Structure Semantics for I^2P^1 can be obtained following a procedure similar to the one applied for I^1P^0. Definition 2.12. For every Boolean algebra B, the d.s. (of type [2,1]) associated to B is the matrix R_[2,1](B).

Volumen 52, Número 2, Año 2018

We define, as in Definition 2.2, the class S_[2,1] of all the discriminant structures of type [2,1], which will be called the D.S.S. for I^2P^1. This class defines |=_{S_[2,1]}, in a way similar to Definition 2.2. Moreover, it is possible to prove, as in Theorem 2.10, the corresponding adequacy result (Theorem 2.13). The proof of the theorem above is based, again, on the definition of a canonical structure R_[2,1](2), whose support is A_[2,1](2) = {(0,1,0), (0,0,1), (0,0,0), (1,0,1), (1,1,0)}. The isomorphism f between both matrices is defined by:

Remark 2.14. At this point it is reasonable to ask for the hidden reasons that motivate, in Definitions 2.2 and 2.12, this kind of structure. As a first approximation, let us pay attention to the operation ¬ in the matrix M_[1,0] (see Definition 2.2): if we consider that D_[1,0] = {T_0}, we can see that, when applied to every truth-value of M_[1,0], ¬ verifies: This fact can be informally "codified" (interpreting "1" as "belonging to D_[1,0]", and "0" otherwise) as T_0 → 1, ¬T_0 → 0. So, the value T_0 is associated to the pair (1, 0). With the same approach, we can associate F_0 with the pair (0, 1), and F_1 with the pair (0, 0) (since neither F_1 nor ¬F_1 belongs to D_[1,0]). This interpretation allows us to define the support set A_[1,0](2), which will suggest the definition of every support of the form A_[1,0](B), as we will see in the next section.
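The codification described in Remark 2.14 can be made concrete in a few lines of Python (our sketch, not the paper's). The negation table below is an assumption, except that ¬T_0 = F_0 and ¬F_0 = T_0; for the resulting codes only the designation status of ¬F_1 matters, and the text fixes it as undesignated:

```python
values = ["T0", "F0", "F1"]
D = {"T0"}                      # only T0 is designated in M_[1,0]

# assumed negation table; mapping F1 to F1 is harmless here, since the
# code below only looks at whether each output belongs to D
neg = {"T0": "F0", "F0": "T0", "F1": "F1"}

chi = lambda v: int(v in D)     # the characteristic function chi_D

# each truth-value x is codified by the pair (chi_D(x), chi_D(¬x))
code = {v: (chi(v), chi(neg[v])) for v in values}
assert code == {"T0": (1, 0), "F0": (0, 1), "F1": (0, 0)}
```

The three pairs obtained are exactly the elements of the support A_[1,0](2) named in Remark 2.14.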
In the case of Definition 2.12, the truth-function ¬ cannot discriminate the truth-values F_1 and F_2 (since neither ¬F_1 nor ¬F_2 belongs to D_[2,1]). Anyway, if ¬ is applied in an iterated way, we obtain the following results: In other words, the iteration of the application of ¬ suggests the definition of the isomorphism shown right after Theorem 2.13, determining also in this case the support of the canonical structure R_[2,1](2).
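The iteration just described can be phrased generically: given a unary truth-function f and k ∈ ω, compute the tuple (χ_D(x), χ_D(f(x)), ..., χ_D(f^k(x))) for each x and check injectivity. A hypothetical Python sketch, tested here on the three-valued case with an assumed negation table for M_[1,0] (only the designation status of its outputs is fixed by the text):

```python
def iterated_code(x, f, D, k):
    # [chi_D]^k(x): apply f repeatedly, recording membership in D each time
    out = []
    for _ in range(k + 1):
        out.append(int(x in D))
        x = f(x)
    return tuple(out)

def discriminates(values, f, D, k):
    # the function discriminates (by iterations) iff [chi_D]^k is injective
    codes = {iterated_code(x, f, D, k) for x in values}
    return len(codes) == len(values)

# assumed ¬ table for M_[1,0]; ¬F1 is some undesignated value
neg = {"T0": "F0", "F0": "T0", "F1": "F1"}.__getitem__
assert discriminates(["T0", "F0", "F1"], neg, {"T0"}, k=1)      # one iteration suffices
assert not discriminates(["T0", "F0", "F1"], neg, {"T0"}, k=0)  # chi_D alone does not
```

For M_[2,1] the same check would succeed only at k = 2, matching the three-component tuples listed after Theorem 2.13.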
The previous examples show that, to obtain the "canonical discriminant structure", which is a special matrix isomorphic to M_[1,0] (resp. M_[2,1]), we use the truth-function ¬, possibly iterated, which discriminates the truth-values of the matrix analyzed. However, this procedure cannot be applied in the case of logics without negation, or even without any unary truth-function. So, it is necessary to adapt the previous idea to these logics and, more generally, to every logic characterized by finite matrices. This will be developed in the next section.

D.S.S.: Abstract Definition and Some Results
Based on the construction of the D.S.S. for M_[1,0] and for M_[2,1], let us try to explain, informally, the process that will allow us to find a D.S.S. of the form S_M for a given C-matrix M = (A, D). For that, according to the ideas of the previous section, we consider as a starting point the map χ_D : A → 2, the characteristic function of D (w.r.t. the universe A). Besides that, to deal with iterations of functions, we will abbreviate the composition of truth-functions as usual. With these conventions, the basis of a D.S.S. S_M is the existence of a discriminant pair for M, which is an adequate generalization of the truth-function ¬ of the previous examples, as we shall see. Definition 3.1. A pair (β, a), with β = β(α_0, α_1, ..., α_m) and a ∈ A^m, is discriminant (by iterations) if there exists k ∈ ω such that the function [χ_D]^k : A → 2^{k+1} is injective, where [χ_D]^k(x) := (χ_D(x), χ_D(f_(β,a)(x)), ..., χ_D(f_(β,a)^k(x))), for every x ∈ A.
Remark 3.2. It should be clear that, if β = β(α_0), then we consider that a is not effectively used. This is the case of the discriminant pairs for M_[1,0] and M_[2,1]: for both logics we consider β = β(α_0) := ¬(α_0). So, f_(β,a)(x) = ¬x, as was suggested in Remark 2.14, actually. Indeed, note that the function f shown there is precisely f_(β,a) and verifies, for both logics, the "discrimination conditions" required in Definition 3.1. The only difference between the cases of M_[1,0] and M_[2,1] is the number of iterations needed. Later in this section we will see an example of a discriminant pair (β, a) such that a is applied in an effective way.
It is worth anticipating here the following result: if a C-matrix M = (A, D) admits a discriminant pair (β, a), then this allows us to obtain, in an implicit way: • The form of the support set A_M(2) of a certain "canonical discriminant structure", a particular C-matrix that will be denoted by R_M(2).
• In addition, (β, a) determines the behavior of the truth-functions in the C-algebra A_M(2) (the underlying algebra of R_M(2)).
• Finally, (β, a) suggests the definition of all the discriminant structures (relative to M) that will constitute a certain class S_M (and a consequence relation |=_{S_M}), which is understood as the D.S.S. associated to M. Moreover, |=_M ϕ iff |=_{S_M} ϕ, for every ϕ ∈ L(C).
To begin our proof of the results indicated above, it is necessary to deal formally with the languages referring to algebras (and, in particular, to Boolean algebras, since they are the basis of the discriminant structures, as we have seen). So, recall the notion of the (first-order) Boolean equational language: Definition 3.3. The (First-Order) Boolean Equational Language is the first-order language with a single binary predicate symbol "=", having as its set of function symbols the set C = {∨, ∧, −, 1, 0} itself (recall Definition 1.3). In this language, the variable symbols will be indicated by x_0, x_1, .... In addition, the set of logical symbols (which should not be identified with any symbol of C) is { , , ⇒, ¬}. This language will be denoted FOBL. Every atomic formula of this language will be called a Boolean equation (by the way, the set of all Boolean equations will be denoted by Eq_C). Finally, an expression eq_1 eq_2 ··· eq_n ⇒ eq_0, where eq_0, ..., eq_n are Boolean equations, is called a Boolean quasi-equation.
Remark 3.4. The previous definitions are motivated by the fact that, by well-known results of Universal Algebra, every Boolean equation (or quasi-equation) is valid in every Boolean algebra B if and only if it is valid in 2. This fact will be used extensively in the proofs below. Besides that, Definition 3.3 shows another use of signatures: C is applied, in the context of FOBL, to the definition of the function symbols of this language (and this is the reason that motivates differentiating C from the logical symbols).
In addition, note the following notational abuse in the definition of FOBL: we are using the same notation for the informal reference to Boolean algebras, as has been done until now, and for the Boolean Equational Language. So, the "informal" notation x_1, x_2, ... referring to variable elements of Boolean algebras will also be used, in a formal way, to denote variable symbols of FOBL. This convention applies also to the interpretation of the symbol "=". On the other hand (and considering the previous convention), recall that every term τ = τ(x_1, ..., x_n) of FOBL determines, in every Boolean algebra B, an n-ary function τ^B : B^n → B in the usual way.
With respect to the relations between terms of FOBL and Boolean functions, we remark this fact, arising from Definition 1.3 and Proposition 1.4: Definition 3.5. For every Boolean algebra B, a function f : B^n → B is C-definable iff f can be expressed by an (iterated) application of the functions of C_B. Proposition 3.6. For every Boolean algebra B and every C-definable function f : B^r → B, there exists a term τ_f of FOBL such that τ_f^B = f. Moreover, by Proposition 1.4, every 2-Boolean function f : 2^r → 2 determines a C-definable function f̂ (the c.n.f. of f) and, therefore, a term τ_f̂ = τ_f̂(x_1, ..., x_n) of FOBL such that τ_f̂^2 = f. Proposition 3.7. Let r ∈ ω, and let S ⊆ 2^r. Then S can be characterized (relative to 2^r) by a Boolean equation eq_S. That is, S = {x ∈ 2^r : x satisfies eq_S}.
Proof. Just consider the characteristic function χ_S : 2^r → 2: since χ_S is a 2-Boolean function, it can be identified with f_S, the c.n.f. of χ_S, which determines the term τ_{f_S} of FOBL.

From this, it easily follows that A′ ≅ A, and thus M′ ≅ M.

Remark 3.9. Note the following fact about A′: for every n-ary c ∈ C, the truth-function c^{A′} : (A′)^n → A′ can be "explained" componentwise. In other words, for every 0 ≤ i ≤ k, there exists a truth-function f_i^c : 2^{n(k+1)} → 2 defined as f_i^c(x_1, ..., x_n) := π_i(c^{A′}(x_1, ..., x_n)) (where x_1, ..., x_n ∈ 2^{k+1}). In addition, every truth-function f_i^c can be identified with its c.n.f. f̂_i^c : 2^{n(k+1)} → 2. This implies two facts: every function f̂_i^c determines a term τ_{f̂_i^c} of FOBL, by Proposition 3.6. On the other hand, the operations on A′ can now be defined in this alternative way: for every n-ary c ∈ C, componentwise by means of the respective c.n.f.s f̂_i^c of f_i^c, for every 0 ≤ i ≤ k. This fact suggests the following definition:

Definition 3.10. Let M = (A, D) be a C-matrix admitting a discriminant pair, and let eq_A and eq_D be the equations that characterize [χ_D]^k(A) (resp. [χ_D]^k(D)), cf. Proposition 3.7. For every Boolean algebra B, the discriminant M-structure associated to B is the C-matrix R_M(B) = (A_M(B), D_M(B)), with D_M(B) := {x ∈ B^{k+1} : x satisfies eq_D}, and A_M(B) the C-algebra whose support is A_M(B) := {x ∈ B^{k+1} : x satisfies eq_A}. In addition, for any n-ary c ∈ C, the operations are given by the corresponding terms τ_{f̂_i^c} (taking into account Remarks 3.9 and 3.4). In particular, the discriminant structure R_M(2) = (A_M(2), D_M(2)) will be called the canonical discriminant M-structure. The class of all discriminant M-structures will be denoted by S_M, and it will be called the D.S.S. associated to M.
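For instance, the support A_[1,0](2) = {(1,0), (0,1), (0,0)} of Remark 2.14 is characterized, relative to 2², by the single Boolean equation x_0 ∧ x_1 = 0; the equation is our reading of that example (the general recipe goes through the c.n.f. of χ_S, as in the proof above). A quick Python check:

```python
from itertools import product

S = {(1, 0), (0, 1), (0, 0)}   # A_[1,0](2), cf. Remark 2.14

def eq_S(x):
    # candidate Boolean equation eq_S:  x0 ∧ x1 = 0
    return (x[0] & x[1]) == 0

# S is exactly the solution set of eq_S inside 2^2
assert S == {x for x in product((0, 1), repeat=2) if eq_S(x)}

# the designated set D_[1,0](2) = {(1, 0)} is cut out by adding x0 = 1
assert {x for x in S if x[0] == 1} == {(1, 0)}
```

The second assertion anticipates the simpler characterization of D_M(B) discussed after Proposition 3.12.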
The following result establishes that the previous definition makes sense: Proposition 3.11. For every C-matrix M and every Boolean algebra B, the set A_M(B) is closed under the truth-functions c^{A_M(B)}. That is, A_M(B) is well defined as an algebra.
Proof. First of all, note that our claim is valid for the particular case of the Boolean algebra 2: indeed, its proof is given implicitly in Proposition 3.8. This fact can be interpreted in the following way, cf. Definition 3.10: for every n-ary connective c, the algebra 2 simultaneously satisfies the set of quasi-equations {qec_i(x_1, ..., x_n)}_{0≤i≤k} given by (for every 0 ≤ i ≤ k): (where τ_{f̂_i^c} is given cf. Remark 3.9). Thus, every Boolean algebra B satisfies, for every n-ary c ∈ C, the set of quasi-equations {qec_i}, too. That is, A_M(B) is closed under applications of c^{A_M(B)}, for every c ∈ C.
Another (abstract) result, useful for the proof of the fundamental theorem of this section, is similar to the one given in Proposition 2.5: Proposition 3.12. The discriminant structures of S_M verify:

In the sequel we will prove the fundamental result of our paper, as mentioned right after Remark 3.2. That is, |=_M ϕ iff |=_{S_M} ϕ, for every ϕ ∈ L(C). For that, we will need some technical results. First of all, note that every set of the form D_M(B) can be characterized in a simpler way: within A_M(B), the equation eq_D amounts to the condition x_0 = 1. Note that this fact can be reformulated as follows: 2 verifies the quasi-equations (∗) eq_A(x) eq_D(x) ⇒ (x_0 = 1) and (∗∗) eq_A(x) (x_0 = 1) ⇒ eq_D(x).
Thus, every Boolean algebra B verifies (∗) and (∗∗), too. In addition, the equations eq_A and eq_D characterize the sets A_M(B) and D_M(B), resp., cf. Definition 3.10. From this, our claim can be proved for every d.s. R_M(B).
In the proof of the fundamental result of our paper we will use Proposition 2.6 again and, in particular, the canonical homomorphism e_∇ : B → B/∇. By the way, an essential result relating, in a general way, homomorphisms of Boolean algebras to prime filters is based on Definition 3.5. That result can be proved by induction on the complexity of the terms of FOBL. As a particular case, it holds for every C-definable function f, as previously indicated.
To illustrate the technical results proved here we will give another kind of example of D.S.S., which will allow us to understand the generalization proposed here by means of the definition of a discriminant pair (β, a): Example 3.20. Let L′ be the logic defined on the basis of the well-known Łukasiewicz three-valued matrix Ł3 (its underlying signature being C′ again), but considering the intermediate truth-value as the only designated one. Formally speaking, L′ = (C′, |=_{M′}), where |=_{M′} is defined by the C′-matrix M′ = (A′, D′), with A′ = {0, 1/2, 1} and D′ = {1/2}. In A′, the truth-functions ⊃ and ¬ are defined as in Ł3. Of course, even when the truth-values and the operations are the same as in the matrix that defines Ł3, the change in the set of designated values produces different tautologies in both logics. In fact, L′ has no tautologies at all. Proposition 3.21. For every ϕ ∈ L(C′), ⊭_{M′} ϕ.
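The proof below rests on the observation that {0, 1} supports a subalgebra of A′ containing no designated value, so a valuation sending every atom into {0, 1} never reaches 1/2. This can be checked directly; writing the Ł3 operations in their usual arithmetic form is an assumption of this sketch:

```python
from fractions import Fraction as F

imp = lambda x, y: min(F(1), F(1) - x + y)   # Lukasiewicz implication on {0, 1/2, 1}
neg = lambda x: F(1) - x                     # Lukasiewicz negation

sub = {F(0), F(1)}                           # the two classical values
# {0, 1} is closed under ⊃ and ¬, and misses the designated set D' = {1/2}
assert all(neg(x) in sub for x in sub)
assert all(imp(x, y) in sub for x in sub for y in sub)
assert F(1, 2) not in sub
```

Hence any formula takes an undesignated value under such a valuation, which is the substance of Proposition 3.21.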
Proof. Note that {0, 1} can be viewed as (the support of) a subalgebra of A′. Now, for every ϕ = ϕ(α_1, ..., α_n) ∈ L(C′), consider an M′-valuation sending every atomic formula into {0, 1}: its value on ϕ stays in {0, 1} and hence outside D′.

Suppose now that the result is valid for every t ≤ n. Consider now β = β(α_0, α_1, ..., α_m) and a = (a_1, ..., a_m) ∈ A^m, where β has n occurrences of ∗. Then β = β_1 ∗ β_2. If β_1, β_2 ∈ V we would be back to the case n = 1. So, suppose (without loss of generality) that β_1 ∉ V. By the Induction Hypothesis, f_(β,a)(a_0) = V. In a similar way, f_(β,a)(a_1) = V. Now, if β_1 ∈ V, then β_2 ∉ V. For this case, adapt the previous reasoning.

Proof. Note that Proposition 4.2 implies that there are no pairs (β, a) that can discriminate all the values of A by means of one iteration. But noting that V is an absorbent element in M_Urq, we have that, for every pair (β, a) and every k ∈ ω, there are a_0, a_1 ∈ A, a_0 ≠ a_1, such that [χ_D]^k(a_0) = [χ_D]^k(a_1) = (0, 0, ..., 0) (k+1 times), and so they cannot be discriminated.

This last result shows that, despite the simplicity of the process for obtaining a D.S.S., its basis (that is, the existence of a discriminant pair) is not trivial.
An additional problem here concerns the uniqueness of a discriminant pair. It is easy to see that, if a matrix M admits a discriminant pair (β, a), such a pair need not be unique. For instance, consider again the matrix M′ of Example 3.20: an alternative discriminant pair (different from the pair shown in Proposition 3.22) is (β′, a′), with β′(α_0, α_1) := α_0 ⊃ α_1 and a′ = a_1 := 1/2. For this pair, we associate 0 with (0, 0), 1/2 with (1, 0), and 1 with (0, 1). We will return to this point at the end of the paper.
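The alternative pair (β′, a′) can be checked numerically: the induced unary function is f(x) = x ⊃ 1/2, and the pair (χ_D(x), χ_D(f(x))) reproduces the association above. A short sketch, again assuming the arithmetic form of the Łukasiewicz implication:

```python
from fractions import Fraction as F

half = F(1, 2)
D = {half}                                   # D' = {1/2}
imp = lambda x, y: min(F(1), F(1) - x + y)   # assumed arithmetic form of ⊃
chi = lambda v: int(v in D)

# f_(beta', a')(x) = x ⊃ 1/2; codify x by (chi_D(x), chi_D(f(x)))
code = {x: (chi(x), chi(imp(x, half))) for x in (F(0), half, F(1))}
assert code == {F(0): (0, 0), half: (1, 0), F(1): (0, 1)}
```

Since the three codes are distinct, (β′, a′) is indeed discriminant with a single iteration.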

Relations with Dyadic and Twist-Structure semantics
As previously commented, Discriminant Structure Semantics is motivated by a simple idea, usual in the field of many-valued logics, which can be stated as follows: the truth-values of a matrix M = (A, D) can often be discriminated according to their belonging to D. As mentioned, this approach has already been applied, with several purposes. For example, the separation (i.e. discrimination) of truth-values can determine whether certain formulas are synonymous or not (see [12], for example). Also, several definitions of two-valued (non truth-functional) semantics make use of this notion. One example of such a construction is Dyadic Semantics (see [1]), which, in our view, possesses certain similarities with D.S.S. So, we will briefly discuss here the relationship between both semantics. For that, the notation of [1] has been modified, for a better comparison with the present paper.
Roughly speaking, a dyadic semantics for a given logic L = (C, |=_M), induced by a finite C-matrix M = (A, D), is built on the basis of: • A set of formulas {φ_i}_{1≤i≤k}, where (for every i) φ_i = φ_i(α) ∈ L(C) (with α ∈ V).
• A function h : A → 2 such that, for every φ_i and every a ∈ A, h(φ_i(a)) = 1 iff φ_i(a) ∈ D.
In addition, the truth-functions associated to the set {φ_i}_{1≤i≤k} (together with h) actually separate the values of A by means of (k+1)-tuples of elements of 2, and therefore the family {φ_i}_{1≤i≤k} can be understood as a generalization of the formula β in our discriminant pair. The scope of Dyadic Semantics is, in this respect, stronger than that of D.S.S., wherein only one formula β (possibly iterated) is allowed. However, it should be noted, from the explanation above, that the separation method of Dyadic Semantics depends on the existence of one-variable formulas of L(C) (which determine truth-functions as usual). On the other hand, in the case of D.S.S., the discriminant pair (β, a) induces the (one-variable) truth-function f_(β,a), which does not need to be "associated" to any particular formula in L(C). This suggests that, while Dyadic Semantics focuses more strictly on the involved languages, D.S.S. mainly analyzes the algebras used. Note, anyway, that if the algebra A of a given C-matrix M = (A, D) is functionally complete, then every A-truth-function has an associated formula that describes it (by means of the connectives of C), and so it would be possible to "jump" from the matrices to the formal languages themselves.
Besides that, a "hybrid method" of separation of truth-values can be applied. Consider simply that every truth-value of a matrix M = (A, D) is tested by a set {f_(β_i, a_i)} of discriminant (non-iterated) pairs. An informal example of this idea can be developed for the logic Urq, previously defined. Proof. The following schema shows the formulas that discriminate every truth-value of Urq, and the identification of each truth-value by means of the function h_4 : A → 2^5, defined by h_4(x) = (χ_D(f_i(x)))_{0≤i≤k}, for every x ∈ A: So, all the truth-values of Urq can be discriminated by the set {f_(β_i, a_i)}.
It must be indicated that the method that "explains" the behavior of the operation ∗, when applied to the tuples obtained, was not developed here. Moreover, it is not clear (as with Dyadic Semantics in general) in what way an algebraic treatment can be carried out. Indeed, Dyadic Semantics has not focused (so far) on algebraic considerations that would allow extending a certain "canonical dyadic semantics" to a whole class K, as in the case of D.S.S. Of course, all these topics deserve a deeper treatment in the future.
We conclude this paper by mentioning certain connections with a non-standard semantics that is a kind of "hidden motivation" for the definition of the discriminant structures: Twist-Structure Semantics. Actually, the preliminary version of this paper ([5]) strongly emphasizes the relations between these semantics. To show some of them we proceed as above, giving in a very informal way some key concepts on which Twist-Structure Semantics is based.
First of all, a Twist-Structure Semantics for a given logic L = (C, |=) (which does not need to be defined by any C-matrix) is usually a class K of C-algebras such that, for every H in K, the following holds: • H is a subalgebra of the product T × T*, where T is a certain ordered C-algebra and T* is the dual algebra of T. Indeed, the "torsion" of the second axis in T × T* is the fact that suggests the name twist-structure for every H.
So, in this informal definition, H plays the same role as the algebras A_M(B) in D.S.S. In addition, the "basis algebra" T acts as the basic Boolean algebra B that determines every d.s. R_M(B).
• As in the case of D.S.S., the operations on every H are given taking into account the behavior of the "original C_A-operations".
• Once the class K is defined, it determines a certain consequence relation |=_K. Now, to prove adequacy (that is, |=_K φ iff |= φ) it is usual to prove "representation results", mainly when there already exists a previously given class A that determines |=: that is, every algebra A of A can be represented by a twist-structure H of K, and vice versa. Moreover, such results actually allow one to prove strong adequacy.
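As a toy illustration of the product T × T* and the role of the second axis, the following sketch (our own example: T is the three-element chain 0 < 1 < 2, with Nelson-style twist operations; it is not any particular structure from the literature) shows how the dualized second coordinate interacts with a swap-style strong negation:

```python
# Sketch: the "twist" in T x T* for a small chain T = {0, 1, 2} (0 < 1 < 2).
# The first axis carries the order of T; the second carries the dual order,
# which is what motivates the name "twist-structure". The strong negation
# just swaps the two axes. An illustrative toy, not the paper's definition.

T = [0, 1, 2]

def meet(p, q):   # (a,b) /\ (c,d) = (min(a,c), max(b,d)): second axis dualized
    return (min(p[0], q[0]), max(p[1], q[1]))

def join(p, q):   # (a,b) \/ (c,d) = (max(a,c), min(b,d))
    return (max(p[0], q[0]), min(p[1], q[1]))

def neg(p):       # strong negation: swap the two axes
    return (p[1], p[0])

# De Morgan laws hold by construction on all pairs of T x T:
pairs = [(a, b) for a in T for b in T]
for p in pairs:
    for q in pairs:
        assert neg(meet(p, q)) == join(neg(p), neg(q))
        assert neg(join(p, q)) == meet(neg(p), neg(q))
print("De Morgan laws verified on", len(pairs), "pairs")
```

The swap-based negation behaves well precisely because the second coordinate is ordered dually, which previews the discussion of the torsion below.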
It is usually accepted that the constructions known as Twist-Structure Semantics first appeared, independently, in the works of M. Fidel and D. Vakarelov, as an alternative semantics for Nelson's Intuitionistic Logic with Strong Negation (see [6] and [15]). Nowadays, the constructions developed in those papers have been adapted to a great number of logics (see [7], [8] or [9], for example). Thus, Twist-Structure Semantics is the object of numerous, deep and fruitful investigations.
Note here that an essential ingredient of Twist-Structure Semantics is missing in D.S.S.: the torsion of the second axis. What is the reason for that torsion? Mainly, twist-structures, even when considered as C-matrices, have usually been analyzed w.r.t. their lattice-theoretic behavior. Under this perspective, the designated elements of every structure of the form H ⊆ T × T* are usually interpreted as the greatest elements (according to the order relation ≤_H, which is inherited from the "twisted order" obtained in T × T*). Because of that, to obtain adequacy it is usually necessary to consider the second axis with an inverted order, mainly because the algebras T and T* are not of the same kind. This is the case when T is a (Generalized) Heyting Algebra, for example. Actually, this torsion is often motivated by the need to explain the behavior of a certain "negation connective", ¬ or ∼ (anyway, not every Twist-Structure Semantics is defined to explain negations: see [9] for an example applied to a logic without negations). Now, in the case of D.S.S., the aforementioned torsion is unnecessary: note that D.S.S. deals only with Boolean algebras, and every Boolean algebra B is isomorphic to its dual B*.
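The last observation (B ≅ B* for every Boolean algebra B) can be checked concretely: complementation is an order-reversing bijection that swaps meets and joins, hence an isomorphism onto the dual. A minimal sketch over the four-element Boolean algebra (the powerset of a two-element set; the example is ours):

```python
# Sketch: every (finite) Boolean algebra B is isomorphic to its dual B* via
# complementation, which is why the "torsion" is invisible in the Boolean case.
# We check this for B = the powerset of {0, 1} (a 4-element Boolean algebra).

from itertools import chain, combinations

U = frozenset({0, 1})
B = [frozenset(s) for s in chain.from_iterable(
        combinations(U, r) for r in range(len(U) + 1))]

c = lambda x: U - x   # complementation: the candidate isomorphism B -> B*

for x in B:
    for y in B:
        # c is order-reversing: x <= y in B iff c(x) <= c(y) in the dual order
        assert (x <= y) == (c(y) <= c(x))
        # c swaps meets and joins: c(x /\ y) = c(x) \/ c(y)
        assert c(x & y) == c(x) | c(y)
print("complement is an isomorphism onto the dual for |B| =", len(B))
```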
In addition, twist-structures are usually considered only as subsets of products of two algebras. On the other hand, D.S.S. is defined so that every discriminant structure can be embedded in an arbitrary finite product (according to the number k of iterations).
On the other hand, we remark the coincidences between these two kinds of semantics: in both cases certain classes of algebras are defined. Moreover, their underlying consequence relations are obtained by means of satisfaction of equations, as we have seen. Turning back to D.S.S., note that it depends neither on considerations about order relations nor on the existence of negations. Since its treatment is more "algorithmic", it could be used in a general way (in the context of Matrix Logics), regardless of the underlying intuitions. However, twist-structures can be viewed as more intuitive than D.S.S., mainly when they deal with certain "well-motivated" logics, whose connectives can be interpreted in a more natural sense.

Concluding Remarks
In this paper, we have shown a way to characterize C-matrices M = (A, D) by means of a process that codifies the truth-values of the support A according to the characteristic function χ_D. Moreover, we have demonstrated that this codification, originally obtained for the Boolean algebra 2, can be generalized, by means of the definition of adequate equations, to every Boolean algebra B, in such a way that a new semantics (namely, D.S.S.) can be defined on the basis of a class S_M of C-matrices. So, Discriminant Structure Semantics recovers a certain algebraic character of those C-matrices that can be characterized in this way. From our point of view, these implicit algebraic characteristics of D.S.S. are important, mainly because D.S.S. can be related to some problems concerning Abstract Algebraizability. Actually, a certain matrix logic that is not algebraizable, but that can be characterized by means of D.S.S., is presented in [5]; indeed, this logic is the logic L of Example 3.20. So, we consider that the study of the algebraic aspects of Discriminant Structures would be an interesting topic for future research.
Besides that, an essential fact is shown in this paper, in Example 4.1: not every finite C-matrix can be associated to an adequate D.S.S. by means of a discriminant pair (β, a). So, a very interesting open problem that we propose here is the following: which conditions (sufficient and/or necessary) are needed for a C-matrix M = (A, D) to admit a discriminant pair (β, a)? Indeed, this kind of question is very usual in the context of Abstract Logic. For instance, the definition of Referential Matrix Semantics (R.M.S.) is given in [16] (see also [3]). This notion, which provides a suitable matrix treatment for certain modal logics (mainly those based on possible-world semantics), is one interesting example of the scope of the general Theory of Matrices. So, it is natural to suppose that the field of action of R.M.S. is connected with the motivation and applications of D.S.S. By the way, one of the main results of [16] establishes that a given logic admits an R.M.S. iff it is self-extensional. We expect to obtain results of this kind, adapted to the case of D.S.S., in future works. Moreover, one of the topics of research that deserves attention is to relate D.S.S. with self-extensionality (and, therefore, with R.M.S.), and with other notions intrinsic to Abstract (Algebraic) Logic, as was previously commented.
Also, the problem of uniqueness (suggested at the end of Section 4) deserves a deep analysis: it is worth noting that, even when there exist two different discriminant pairs for a given matrix M, the matrices that the two pairs determine are isomorphic. Anyway, this problem can be studied from different points of view: for instance, the relation between the D.S.S. determined by each pair is an interesting matter of study. The connection between uniqueness of discriminant pairs and reduced matrices deserves future research, too.
Finally, we remark again the obvious relations of D.S.S. with Dyadic Semantics and with Twist-Structures: these natural connections deserve a deeper study. In view of all the above, we think that this initial study of Discriminant Structure Semantics can be expanded into several interesting research lines.
… one report of the referees. In addition, the bibliography relative to Referential Matrix Semantics indicated below (also suggested in the reports on this work) seems to be a very interesting field for future investigations.

Definition 1.5. An abstract (sentential) logic is a pair L = (C, ⊢), where C is a signature and ⊢ ⊆ ℘(L(C)) × L(C) is a consequence relation for L(C). That is, it satisfies, for every Γ ∪ {α} ⊆ L(C), extensiveness, monotonicity and transitivity (we omit these well-known definitions).

Definition 1.6. Given a signature C, a C-matrix is a pair M = (A, D), where A = (A, C_A) is a C-algebra and D ⊆ A. The elements of D are called the designated values of M. In the context of C-matrices, any t-ary operation c_A ∈ C_A will be called an A-truth-function. Besides that, the support of M is just the support A of A.
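Definition 1.6 has a direct computational reading. As a sketch (our own illustration, with the classical two-valued matrix as the concrete instance, not an object defined in the paper), a finite C-matrix can be represented as plain data:

```python
# Sketch: a finite C-matrix M = (A, D) as plain data -- a support set, the
# A-truth-functions interpreting the signature C, and the designated set D.
# The concrete matrix below (classical two-valued logic) is only an example.

from dataclasses import dataclass

@dataclass
class CMatrix:
    support: set      # support A of the underlying C-algebra A
    ops: dict         # signature symbol -> A-truth-function (c_A in C_A)
    designated: set   # D, the designated values (a subset of the support)

M2 = CMatrix(
    support={0, 1},
    ops={"neg": lambda x: 1 - x,
         "and": lambda x, y: min(x, y)},
    designated={1},
)

assert M2.designated <= M2.support
assert M2.ops["and"](1, M2.ops["neg"](0)) in M2.designated   # 1 /\ ~0 is designated
```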

Remark 3.13. (b) Every R_M(B)-valuation w : L(C) → A_M(B) defines (w_0, ..., w_k), a (k+1)-tuple of non-homomorphic functions w_i : L(C) → A_M(B), with w_i := π_i ∘ w (for every 0 ≤ i ≤ k). Note that R_M(2) is the C-matrix M of Proposition 3.8, indeed. From this, Proposition 2.6 and Proposition 3.12 (a), we get that, for every Boolean algebra B and every prime filter ∇ ⊆ B, B/∇ is a Boolean algebra (which is isomorphic to 2), and so R_M(B/∇) ≅ M. As expected, the class S_M determines a consequence relation on L(C):

Definition 3.14. The class S_M defines the consequence relation |=_{S_M} by means of the "local consequence relations" |=_{R_M(B)}, cf. Definition 1.7. That is: if R_M(B) is in S_M, then Γ |=_{R_M(B)} φ iff, for every R_M(B)-valuation w : L(C) → A_M(B) such that w(γ) satisfies eq_D for every γ ∈ Γ, it holds that w(φ) satisfies eq_D. And |=_{S_M} := ⋂ |=_{R_M(B)} (with R_M(B) ranging over S_M).
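For a finite matrix, a local consequence check of the kind used in Definition 3.14 can be carried out by brute force over valuations. The sketch below is a deliberate simplification of ours (homomorphic valuations over the classical two-valued matrix, formulas encoded as nested tuples); it only illustrates the shape of the satisfaction test, not the equational machinery of eq_D:

```python
# Sketch: a local consequence check Gamma |= phi over a finite matrix, by
# brute force over all valuations of the propositional variables. Formulas
# are strings (variables) or nested tuples (op, arg1, ...); the two-valued
# matrix and the example entailments below are our own illustration.

from itertools import product

OPS = {"imp": lambda x, y: max(1 - x, y)}   # classical implication on {0, 1}
D = {1}                                     # designated values

def ev(formula, v):                         # evaluate under valuation v
    if isinstance(formula, str):
        return v[formula]
    op, *args = formula
    return OPS[op](*(ev(a, v) for a in args))

def variables(f):                           # collect the variables of a formula
    if isinstance(f, str):
        return {f}
    return set().union(*(variables(a) for a in f[1:]))

def entails(gamma, phi):
    vs = sorted(set().union(*(variables(f) for f in list(gamma) + [phi])))
    for vals in product((0, 1), repeat=len(vs)):
        v = dict(zip(vs, vals))
        # a counterexample: all premises designated, conclusion not designated
        if all(ev(g, v) in D for g in gamma) and ev(phi, v) not in D:
            return False
    return True

assert entails([("imp", "p", "q"), "p"], "q")    # modus ponens is valid
assert not entails(["q"], "p")                   # the converse entailment fails
```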

Corollary 4.3. The matrix M_{Urq} does not admit discriminant pairs.

Proposition 5.1. The truth-values of the logic Urq of Example 4.1 can be discriminated by a set {f_i}_{0≤i≤4} (with f_i = f_{(β_i, a_i)} : A → A) of truth-functions.