Thursday, March 28, 2024

On Hegel's Logic

Addendum: one must not be misled by the apparent negative attitude to mathematics expressed in certain passages of Hegel's published works. The following extract from his letters (see Wallace's translation of the Encyclopedia Logic, xiv-xv) is very revealing:

'I am a schoolmaster who has to teach philosophy, who, possibly for that reason, believes that philosophy like geometry is teachable, and must no less than geometry have a regular structure. But again, a knowledge of the facts in geometry and philosophy is one thing, and the mathematical or philosophical talent which procreates and discovers is another : my province is to discover that scientific form, or to aid in the formation of it'.
' You know that I have had too much to do not merely with ancient literature, but even with mathematics, latterly with the higher analysis, differential calculus, chemistry, to let myself be taken in by the humbug of Naturphilosophie, philosophising without knowledge of fact and by mere force of imagination, and treating mere fancies, even imbecile fancies, as Ideas.'

It is easy to see Hegel's logic as a kind of romantic epic panlogist retelling of Kant's critical idealism with a half-veiled current of mystical enthusiasm characteristic of his times. Arguably, there is little in Hegel's logic that cannot be found by a careful and attentive reading of Kant's Kritik der Reinen Vernunft (and this if we ignore the non-negligible input from Proclus).  However, we would like to argue that:

1. There are actually strong connections between Hegel's project and the subsequent logicism (and anti-psychologism) of Lotze and Frege (cf. how Hegel stresses that the Begriff is not psychological).  The Hegel-Gödel-Günther connection has also been noted.

2. Hegel's logic can be seen as an anticipation and quite thorough alternative realization of Husserl's phenomenological project, and also as asking questions that remain pertinent in the contemporary philosophy of logic, especially with regard to relevance logic, substructural logic, etc.

3. Hegel's logic has very interesting connections to modern category theory and categorical logic and their interdisciplinary applications. His critique of formalism is also fertile from this perspective.

4. There are many interesting connections to be explored with ancient philosophy, with such diametrically opposite philosophers as Sextus Empiricus and Proclus.

5. Together with Leibniz, Hegel seems to have anticipated the idea of computation in nature. 

The section on Essence in Hegel's logic details classical metaphysics, Kantian metaphysics and finally the idealism of his contemporaries (Substance) whose shortcomings are finally overcome with the transition to Notion. We should compare the treatment of  Thing there to that in the Phenomenology of Spirit.

In the section on Being, in the parts involving the one and the other, limit, the infinite, the one, and the successive transitions involved - all this suggests a connection to a universal construction in mathematics: that of a completion. We have a finite structure which is limited, incomplete in some determinate way. For instance a field may not be algebraically closed. Something is missing, something extrinsic that needs to become intrinsic (cf. the process of finite algebraic extensions, adjoining roots, etc.).  Yet the finite completion, the finite adjoining of the missing aspects generates in turn its own incompleteness at the next level, and so on. This situation is only overcome by assimilation, by incorporation of this extension process, this outflow, into the structure itself - now invariant under this finite 'passing the limit'. This 'incorporation' is itself a kind of  higher 'limit', a limit of limits, so to speak (which has a universal property, or a minimality property). Embodiments of this process and structure are not only found in algebra but in essential ways also in measure theory (the Borel $\sigma$-algebra, regular and complete measures) and analysis (completion and approximation theorems). Recall how Hegel states in the Encyclopedia that the determinations of Being are exterior to each other, their process is the passing into another.
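A minimal algebraic illustration of this 'limit of limits' (our gloss, not Hegel's text): each finite extension removes one determinate incompleteness while generating new ones, and the algebraic closure gathers the whole extension process into a single structure, \[ \mathbb{Q} \subset \mathbb{Q}(\sqrt{2}) \subset \mathbb{Q}(\sqrt{2}, \sqrt[3]{2}) \subset \cdots \subset \overline{\mathbb{Q}} = \varinjlim_{K / \mathbb{Q} \text{ finite}} K, \] now invariant under further adjunction of roots and minimal (universal) among the algebraically closed extensions of $\mathbb{Q}$.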

The passage to quantity: we compare this to the emergence of measures. Consider the proof of Urysohn's lemma. The main ingredient is the fact that in a locally compact Hausdorff space $X$ if we have a compact set $K$ contained in an open set $U$ then we can find an open set $V$ such that $K \subset V \subset \overline{V} \subset U$. This expresses $K$'s overcoming of its limitation relative to $U$. In the proof of Urysohn's lemma this process of overcoming the limit is multiplied into a countably infinite family of sets indexed by the rationals and then completed into a continuous function, into continuous quantity. Somehow this relationship to the real numbers is implicit in the abstract concept of a locally compact Hausdorff space.  We should study situations in which valuations, measures, uniparametric semigroups of automorphisms, emerge.  What is the meaning of continuous functions with compact support ? And of Borel measures emerging from bounded positive linear functionals on spaces of such functions ?  Also investigate the passage of quantity again into quality.
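For reference, the standard construction behind this remark: one iterates the step $K \subset V \subset \overline{V} \subset U$ to obtain open sets $V_q$ indexed by the dyadic rationals $q \in [0,1]$ with \[ K \subset V_0, \qquad \overline{V_q} \subset V_{q'} \text{ for } q < q', \qquad \overline{V_1} \subset U, \] and then sets $f(x) = \inf \{ q : x \in V_q \}$ (with $\inf \emptyset = 1$): the discrete, order-indexed family of overcomings of the limit is completed into a continuous function with $f = 0$ on $K$ and $f = 1$ outside $U$.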

Interesting concepts are those of extension (of a functional), restriction,  Riesz representation via a measure and the maximum modulus type result (maximum only at the boundary).

Note the distinction between pure quantity and determinate quantity.  Pure quantity or indifferent quantity (apeiron) is clearly topological, corresponding to what is implicitly determined by a variable, a sequence, a net.  See our paper on the modern incarnations of the Aristotelian concepts of topos and continuity for a detailed discussion. Attraction is aggregation, cohesion, expressed by the open sets of a topology.  Repulsion is discreteness. This has been treated in our note on higher topos theory (in connection to Lawvere and Schreiber). The duality and mutual transition between the continuous and discrete is of utmost interest (geometric structures give rise to algebraic structures, these in turn have geometric realizations).  Intensive and extensive quantity is nothing but the concept of sheaf. The treatment of measure recalls Morse theory and singularity theory, moduli spaces; also the imposition of growth conditions in analysis.  But in the initial section of the Greater Logic on measure there is a suggestion that measure is a kind of conservation law; that which remains invariant under actual or possible or conceivable change and variation. Measure is clearly related to mathematical physics; Hegel states that a measure now is not pure quantity but quantity in relationship to something exterior to itself. Hegel also raises the question of natural vs. artificial units of measure. He introduces gradualness (i.e. continuous change in quantity with surprising jumps in quality) and mentions the Sorites paradox. Hegel's interpretation is perhaps not at all incompatible with our own interpretation in terms of locally constant sheaves.  Local homogeneity and uniformity are reconciled with global non-triviality. Hegel's knotted line = constructible sheaf, the 'knots' correspond to the stratification (cf. singular points of a variety, bifurcation set, etc.).

The section on measure seems to echo some of our ideas on general systems theory concerning how systems combine into larger systems or split again, etc. Hegel uses chemistry and thermodynamics. It also suggests the idea of a program or function which can be integrated with different inputs producing different behaviours yet still maintaining its essence. Note also that combination and mixture feature in Aristotle's Topics (much of the discussion in Measure should be studied in conjunction with Aristotle's Physics).

The section on Essence is related to fundamental dualities in mathematics (equivalence between distinct categories) as we have seen before. Essence is captured by adjunctions between categories.  From one category we are led naturally to ask about its correspondent in the other (cf. the inverse Galois problem). In algebraic topology this illustrates the mutual dependency and subsumption of quality (topological space) and quantity (for instance, abelian groups). 

But first of all, the homotopy type theoretic interpretation of an identity type $t =_A t$ as a closed path expresses admirably reflection into self by reflection into another. The emergence of difference is expressed by the monodromy of a closed path.  Hegel's observations on the law of the excluded middle suggest connections to intuitionism and the types as spaces paradigm (and the topological nature of genera). Grund is to be seen as the general idea of a deductive system.  In a category the determinations of an object spring from its relationship to others (the Thing). For instance, being a terminal object.
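In standard homotopy type theory notation (a gloss, not something in Hegel): the loop space $\Omega(A, a) :\equiv (a =_A a)$ packages this reflection, and transporting a family $P : A \rightarrow \mathcal{U}$ along a loop $p : a =_A a$ yields an equivalence $\mathrm{transport}^P(p) : P(a) \simeq P(a)$ which is in general not the identity; this monodromy is one precise sense in which difference emerges from a return to the same point, the computation $\pi_1(S^1) \simeq \mathbb{Z}$ being the simplest instance.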

It might be interesting to interpret the section on Appearance (phenomenon) in relationship to the duality between theory and model (in the sense of logic).  But Hegel focused on 'force', on the interior vs. exterior relation.  Effective Reality might be expressed in Goguen's institutions, adjunctions between theories and models (or as in Goguen's approach to algebraic theories), that is, the unification of both points of view and the incorporation of their mutual determination.  Cf. the duality and circle between specification and verification. Further development of Effective Reality, reciprocal action, can be seen as involving concurrency (in the sense of theoretical computer science) and considerations on modality (cf. Hegel's 'the truth of necessity is freedom').

Notion is related to group actions (or model theoretic generalizations). The category of group representations represents the manifold (in the sense of plural) but united realizations of the notion.  Principal bundle $PG$ (the universal), a cocycle (the particular), the associated bundle (individual). But of course a central place in the Notion will be played by categorical logic. The transitions from subjective notion to objective notion are reflected in the progress of internalization of proof as we outlined in a previous draft. Also at the level of objective notion systems theory comes in, as well as the duality between theory and model.

Notion is not a theory of mind but the necessary template for a theory of mind, upon which a theory of mind must depend.  Our post on 'Internalizing Tarski' may express some of the final higher stages of the unfolding of Notion, related to Knowledge in the aspect of limitation vs. infinite and reflection-into-self.

Hegel's Logic is to be viewed as the realization of the project of Proclus' metaphysics, psychology and epistemology: the unveiling of the  pure  universal essences implicit in mathematics but which in themselves transcend mathematics: the logoi.

What is Hegel's view on the operations on quantity, and their abstract properties, in particular distributivity and linearity ? In the Encyclopedia some considerations are given on arithmetical operations and the logical progression from succession, counting, addition, multiplication to exponentiation.  For instance multiplication is seen as a counting of multiplicities as units.  It would be interesting to study from a Hegelian perspective the passage from an ordinary category to an additive category to an abelian category which explodes in a richness of properties. The abelian group structure on morphisms is the introduction of quantity or measure into what was previously an abstract discrete set.  But the $\mathbf{0}$ object is interesting. In general for pointed categories the zero object is a synthesis between initial and terminal object. Indeed this follows from the general setting of requiring the hom-sets to be pointed sets. This entails the weakest form of coherence: there is at least one morphism connecting any two objects in the category.  This can follow from postulating the existence of a morphism $\mathbf{1} \rightarrow \mathbf{0}$.  Note the automorphic simplicity of terminal and initial objects: $hom(\mathbf{0}, \mathbf{0}) = \{id_{\mathbf{0}}\}$, etc. Thus if there is a morphism of the terminal object into the initial object then the two are isomorphic. We must examine carefully the construction of quotient categories, in particular via multiplicative systems. For this is an abstraction of the definition of rational numbers. Find parallels between Hegel and the transition from an abelian category  $\mathcal{A}$ to the category of complexes $\mathbf{C}(\mathcal{A})$ to the homotopy category $\mathbf{K}(\mathcal{A})$ to the derived category $\mathbf{D}(\mathcal{A})$ where the infinite regress of a complex is in a way overcome through a return to self via the triangulated category structure.
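To spell out the remark about $\mathbf{1} \rightarrow \mathbf{0}$ (a standard argument, recorded here for convenience): if $\mathbf{0}$ is initial, $\mathbf{1}$ is terminal and some $t : \mathbf{1} \rightarrow \mathbf{0}$ exists, then the composite $\mathbf{0} \rightarrow \mathbf{1} \xrightarrow{t} \mathbf{0}$ lies in $hom(\mathbf{0},\mathbf{0}) = \{id_{\mathbf{0}}\}$ and the composite $\mathbf{1} \xrightarrow{t} \mathbf{0} \rightarrow \mathbf{1}$ lies in $hom(\mathbf{1},\mathbf{1}) = \{id_{\mathbf{1}}\}$, so $\mathbf{0} \cong \mathbf{1}$ is a zero object; every $hom(A,B)$ then contains the zero morphism $A \rightarrow \mathbf{1} \cong \mathbf{0} \rightarrow B$, which is exactly the weak coherence mentioned above.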

Wednesday, March 27, 2024

A new approach to CIL

We have seen that there are canonical transformations between CIL, $\lambda$IL and $B$. In fact there may be a more direct transformation of $\lambda$IL to CIL which does not involve reduction to the long normal form. The great interest of CIL models is that they provide - due to the above correspondences - a semantics for the non-classical lambda calculus $\lambda$IL (we leave out for a moment the stronger systems involving object-extensions). Essential to the definition of CIL model is the direct translation into model operations of CIL's  'laws of sense', an extensive list of amalgamation and permutation relations. Why not define all these via $\lambda$IL ? Given a term $T$ of CIL let $T^\circ$ denote the corresponding term in $\lambda$IL. Given a term $t$ in $\lambda$IL let $t^\bullet$ denote its corresponding term in $CIL$.  Then we transpose $\beta$ and $\eta$ reduction in $\lambda$IL to obtain rules in CIL.  That is, $T_1\leadsto T_2$ holds by definition if $T_2 = s^\bullet$ and $T_1^\circ \leadsto s$ in $\lambda$IL through $\beta$ and $\eta$ reduction. This drastically simplifies things. Our 'laws of sense' become a lemma rather than part of a definition. We see also that $\lambda$IL plays the mediating role between  CIL (semantic tool) and B (the original syntactic system). An important task is now to make explicit the direct translation from $\lambda$IL into CIL.

The following paper is of great interest for the CIL project:

A Complete, Type-free "second-order" Logic and Its Philosophical Foundation, Christopher Menzel, 1986. 

Let us then define $T^\circ$ for a CIL term $T$.  For simplicity we identify the primitive terms of CIL and $\lambda$IL. Thus for a primitive term $P$ we have $P^\circ = P$.

Given $s \in S^n$ we can define the application of $s$ on any ordered set having $n$ elements, in particular on a length $n$ sequence or $n$-tuple of variables.  Given a $p \in \Pi^n$ we write $p^n_k$ if it is a $k$-partition of $(1,...,n)$. Then we similarly define the application of $p$ on any ordered set of $n$ elements, etc.  Furthermore given a set with $k$ elements $(x_1,...,x_k)$ we define $p^\star (x_1,...,x_k)$ to be the sequence $(x'_1,...,x'_n)$ in which only the $x_i$ occur and for which $p(x'_1,...,x'_n) = (x_1,...,x_k)$.   Now we define \[ (per_p T)^\circ = \lambda x_1....x_n. T^\circ p(x_1....x_n) \text{  for $p \in S^n$}\] \[(link_p T)^\circ = \lambda x_1....x_k. T^\circ  p^\star(x_1,...,x_k) \text{ for $|p| = k$}\]
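For instance, on our reading of these definitions: if $p \in S^2$ is the transposition then $(per_p T)^\circ = \lambda x_1 x_2. T^\circ x_2 x_1$, and if $p$ is the one-block partition $\{\{1,2\}\}$ then $p^\star(x_1) = (x_1, x_1)$ and $(link_p T)^\circ = \lambda x_1. T^\circ x_1 x_1$, i.e. $link$ performs contraction (cf. $link_{\{\{1,2\}\}} L$ in the 'Mary' example further below).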

Given a comb-sequence $s$ we denote by $s^\star$ the sequence obtained from $s$ by removing all entries which are $\star$. Let $|s| = n$ and $|s^\star| = k$.   We have that $\tau(s)$ is a map from $(1,...,k)$ to $(1,...,n)$ which assigns to $l$ the position of $s^\star_l$ in $s$. We denote the partial function which is the inverse of $\tau$ by $\sigma$.

Given a comb-sequence $s$  and a sequence of sequences of variables $m$ of length $n$,  we let $V(m,s)$ be a sequence of sequences of variables $(u^1,...,u^k)$ where $k= |s^\star|$ and such that $|u^l| = s^\star_l$ and $u^l$ consists of the first $|s^\star_l|$ variables of $m_l$ (we assume that $|m_l| \geq |s^\star_l|$).

A term $T$ is in standard form if it is of the form $\lambda x_1...x_n. T'$ with $T: 0$ (hence $T: n$). Any term is equivalent to one in standard form. When we define $(comb_s T S_1...S_k)^\circ$ we assume that $T^\circ = \lambda x_1....x_n. T'$ and the $S_i^\circ$ are in standard form $\lambda x^i_1....x^i_{n_i}. S'_i$ and that the $x^i_j$ are all disjoint for different $i$.  We set $m$ to be equal to the sequence of the sequences $x^i_1...x^i_{n_i}$ of the $S_i$.

\[(comb_s T S_1...S_k)^\circ = \lambda w_1....w_n. T'[x_{\tau(1)}....x_{\tau(k)} / S''_1....S''_k]\]

where $w_i = V(m,s)_{\sigma(i)}$ if $s_i \neq \star$ and $w_i = x_i$ otherwise.  We have that $S''_i$ is obtained by removing $s^\star_i$ variables from $\lambda x^i_1....x^i_{n_i}. S'_i$.

We can also give an alternative definition which does not involve substitution but only abstraction and application.  \[(comb_s T S_1...S_k)^\circ = \lambda w_1....w_n. (T^\circ W_1...W_n)\] where $W_i = S''_{\sigma(i)}$ for $s_i\neq \star$ and $W_i = x_i$ otherwise.

Let $s$ be a dum-sequence of length $n+1$ and $w = x_1....x_n$ a sequence of variables. Then we define $s(x_1....x_n)$ to be the sequence $y_1....y_m$ where $m = n + \Sigma_{i=1}^{n+1} s_i$, obtained by inserting $s_i$ fresh variables before $x_i$ in $w$ for $i\leq n$ and $s_{n+1}$ fresh variables after $x_n$. Thus we define \[(dum_s T)^\circ = \lambda s(x_1....x_n). T^\circ x_1....x_n\]
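For example, with $n = 2$ and $s = (1, 0, 2)$ we get $s(x_1 x_2) = z_1 x_1 x_2 z_2 z_3$ for fresh $z_1, z_2, z_3$, and so $(dum_s T)^\circ = \lambda z_1 x_1 x_2 z_2 z_3. T^\circ x_1 x_2$: $dum$ introduces vacuous abstractions at the prescribed positions.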

Finally we define $(log^\neg T)^\circ = \neg T^\circ$, $(log_\& TS)^\circ = \&T^\circ S^\circ$ and $(log^n_\exists T)^\circ = \exists...\exists T^\circ$ ($n$ copies of $\exists$).

Saturday, March 23, 2024

Internalizing Tarski

Take standard first-order semantics.  Consider a model $M$ of a certain theory $T$. Now suppose that we wish to include a subset $\Psi \subset M$ to represent first-order formulas $\phi$. We denote the element corresponding to a formula $\phi$ by  $\bar{\phi} \in \Psi$.  Suppose we wanted all subsets of $M$ to also have internal representations via elements of a set $\Omega \subset M$. Given $X \subset M$ let the representative be denoted by $\bar{X} \in \Omega$.  Then we wish to have a predicate $m(x,y)$ such that

\[    M \Vdash  m(\bar{X},\overline{\phi(x)})  \text{  iff }  \{z \in M:   z \in  \mathcal{I}\phi(x)\} = X \]

where $\mathcal{I}$ is the Tarskian interpretation in $M$. We will investigate later if this can lead to a paradox or limitation result.

Now suppose that in FOL we had an operator $[\,]$ which transforms a formula $\phi$ into a constant $[\phi]$: the free variables in $\phi$ this time (unlike in Bealer's system) are not free in $[\phi]$, it is a constant (or closed term).  Suppose we had a predicate $Sub(a,b)$ which holds precisely when $a = [\psi(x)]$ for some $\psi(x)$ with single free variable $x$ and $b = [\psi([\psi(x)])]$.  This expresses the reflection-into-self of $a$.

Now suppose we had a truth-like predicate $T(a)$ which holds precisely when $a = [\phi]$ for some sentence $\phi$ which is, for instance, provable in a deductive system $D$ or true in a given class of models $M$, etc.

We assume we have the basic properties: i) $T([\phi]) \rightarrow \phi$ and ii)  $T([\neg \phi])$ iff $ \neg T([\phi])$.

Consider now the formula $G(x) \equiv  \neg\exists y. Sub(x,y) \,\&\, T(y)$ and let $g$ be such that $Sub([G(x)], g)$, that is, $g = [G([G(x)])]$.

Suppose that $T(g)$. Then since by hypothesis $Sub([G(x)], g)$, the definition of $G(x)$ gives $\neg G([G(x)])$, and hence by (the contrapositive of) i) we get $\neg T([G([G(x)])])$, that is, $\neg T(g)$, a contradiction. 

Note how $g$ involves a further reflection-into-self mediated by negation. The result is thus a limitation on predicates satisfying i) and ii): from $\neg T(g)$, ii) gives $T([\neg G([G(x)])])$ and then i) gives $\neg G([G(x)])$, i.e. $T(y)$ for some $y$ with $Sub([G(x)], y)$; if $Sub$ relates $[G(x)]$ only to $g$ this is $T(g)$ again, so no predicate can satisfy both conditions.  It seems very likely that the above argument could be transposed to Bealer's system.

In Bealer's system if we postulate $y\Delta [\phi(x)]_x \equiv \phi(y)$ then we can obtain  a Russell-type paradox (without relying on practically any other axiom).
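Spelling this out: taking $\phi(x) \equiv \neg (x \Delta x)$ and $r = [\neg (x \Delta x)]_x$, the postulated schema with $y := r$ gives \[ r \Delta r \equiv \neg (r \Delta r), \] the Russell-type contradiction.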

A very interesting kind of 'limit theorem' involves how formal systems (such as $HA^\omega$) cannot represent the totality of total recursive functions.

Also the curious fact that we can define the set of indefinable real numbers.

This reflection-into-self, can it be given a geometric embodiment (if we wish to consider the connection between geometry and logic as in topos theory) ?

The downward Löwenheim-Skolem theorem is not the same kind of limit result. We consider that it is formalized within ZF(C) itself. If we do not accept this (i.e. the self-reflectivity of ZF(C)) then we are done. Otherwise we see that ZF(C) reflected-into-itself is aware that it is not really set theory, but only a countable reflection of set theory. This is a positive result. A limited system is  itself 'aware' of its own limitation.

Friday, March 22, 2024

A note on the quantum version of a general systems model

For simplicity we consider that our system has a genesis at time $t_0$ with state $s_0 \in S$.  We denote by $S^{T_{t_0}^t}$ the set of paths $p: [t_0,t] \rightarrow S$.  For each $t > t_0$ we assume that $S^{T_{t_0}^t}$ is endowed with the structure of a $\sigma$-algebra. Problem: find the coherence conditions involving restriction maps for different $t$.  We denote by $\mathcal{M}S^{T_{t_0}^t}$ the set of complex-valued measures on $S^{T_{t_0}^t}$.  Then a general quantum system is given by a family of maps

\[\mathcal{S}_t: I^{T_{t_0}^t} \rightarrow \mathcal{M}S^{T_{t_0}^t}\]

with a suitable set of coherence conditions which relate the maps for different $t$. It seems natural to require that the restriction map on state paths be measurable. Problem: interpret the contextuality vs. non-contextuality aspect of Gleason's theorem in this framework. Also how do we interpret the output function ?

Another version: fix complex-valued measures $\mu_t$ on $S^{T_{t_0}^t}$ and define a system instead as a family of maps

\[\mathcal{S}_t: I^{T_{t_0}^t} \rightarrow \mathcal{D} S^{T_{t_0}^t}\]

where  $\mathcal{D} S^{T_{t_0}^t}$ is the space of measurable complex-valued functions $f: S^{T_{t_0}^t} \rightarrow \mathbb{C}$. Then probabilities for a given input path $J$ are calculated by integration, for instance by

\[| \int_{\Omega}\mathcal{S}_t J d\mu_t |\]
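A toy discretized instance of this recipe, purely illustrative (the state set, the kernel $K$ and all names are invented, and time and state space are taken finite so that the 'integral' becomes a finite sum):

import itertools

# finite state space and a complex 'transition weight' K, standing in for the
# density of the complex-valued measure on the (finite) path space
S = ['a', 'b']
N = 3
K = {('a', 'a'): 0.8, ('a', 'b'): 0.6j,
     ('b', 'a'): -0.6j, ('b', 'b'): 0.8}

def weight(path):
    # complex measure of the singleton {path}: product of the step weights
    w = 1 + 0j
    for s_prev, s_next in zip(path, path[1:]):
        w *= K[(s_prev, s_next)]
    return w

def prob_like(event, s0='a'):
    # |integral over the event|, here a finite sum over all paths satisfying it
    total = sum(weight((s0,) + p)
                for p in itertools.product(S, repeat=N)
                if event((s0,) + p))
    return abs(total)

# probability-like quantity of ending in state 'b' after N steps
print(prob_like(lambda path: path[-1] == 'b'))

Here the input path $J$ is held fixed (absorbed into $K$); letting $K$ depend on an input history would recover the dependence of $\mathcal{S}_t$ on $I^{T_{t_0}^t}$.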

Thursday, March 21, 2024

$\lambda I$ and CIL

 Let us recall the definition of $\lambda IL$. It consists of a collection of primitive terms $M$,$N$,$P$,..., each having an assigned sort $n\geq -1$, and a collection of variables $x,y,z,...$. Terms are formed as follows:

1. A primitive term is a term.

2. If $T$  is a term of sort $n \geq 1$ and $S$ is a term then $TS$ is a term of sort $n-1$.

3. If $T$ is a term  of sort $n\geq 1$ and $x$ a variable then $Tx$ is a term of sort $n-1$.

4. If $T$ is a term of sort $n\geq 0$ and $x$ a variable then $\lambda x. T$ is a term of sort $n+1$.

5. If $T$ is a term of sort $n\geq 1$ then $\exists T$ is a term of sort $n-1$.

6. If $T$ is a term of sort $n\geq 0$ then $\neg T$ is a term of sort $n$.

7. If $T$ and $S$ are terms of sort $n\geq 0$ then $\&TS$ is a term of sort $n$.

Note that an isolated variable has no sort and is not a term. If $T$ is a term of sort $n$ we write $T:n$. We have the variants of the usual concepts of $\beta$- and $\eta$-equivalence. Thus there is also the concept of long normal form. We also need additional rules for $\exists,\neg,\&$ which we call $\zeta$-rules.  

\[ (\exists T) S \leadsto  \exists (\lambda y. TyS) \]\[(\neg T) S \leadsto \neg (TS)\]\[(\&TS)W \leadsto \&(TW)(SW) \]

We also need rules for the interplay of $\lambda$ and the logical operators. For instance $\lambda x. \neg T \leadsto \neg \lambda x. T$, etc.  Note that application corresponds only to a very special case of $comb$. Thus the associativity we find in other cases of $comb$ does not even apply to ordinary application.
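As a small sanity check, take hypothetical primitives $L : 2$ and $M : -1$. Then $L x : 1$, $L x M : 0$, $\lambda x. L x M : 1$ and $\exists (\lambda x. L x M) : 0$. A $\zeta$-rule instance: $(\exists L) M \leadsto \exists(\lambda y. L y M)$, and both sides indeed have sort $0$.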

There is an algorithm which transforms a term of $\lambda IL$ into a term in $CIL$ and vice-versa.  If the term is in long normal form we can read off a $B$ term directly. As in Goldilocks and the three bears, after trying $CIL$ and $B$ we find that $\lambda IL$ is just the right combination of versatility and simplicity.

We no longer follow our initial ideas on definite descriptions and the Peano operator. We view, as did Bealer, elements of $D_{-1}$ as individuals or individual concepts whose 'extensions' are themselves.  We emphasize that the extension of a $D_i$ is not a 'reference'.  Propositions do not reference truth-values and properties and relations do not reference sets. How to deal with the reference of propositions is an open problem as is the meaning of definite descriptions and strict proper names. For the current purposes of CIL we endorse a Kripkean rigid designator or 'tagging' approach to the relationship between certain constants and elements of $D_{-1}$. The problem we wish to address is that in fact extensions do become referents ! The extension of a property or relation, etc. is an (ideally) well-defined object (at least as parametrized by states-of-affairs).  So we introduce the operator $\{T\}$ in $\lambda IL$: if $T$ is interpreted in $D_n$ for $n\geq 1$ then $\{T\}$ corresponds in a model to a subset of $D^n$. This can be a singleton (for instance for $T : 1$) or even $\emptyset$. But what sort is $\{T\}$ ? If it is to be $-1$ then $D_{-1}$ must be some kind of model of set theory, that is, it must be able to internalize $\mathcal{P}D^n$ - or can we make sense - by transfinite inductive definition - of setting $D_{-1}$ to be $\mathcal{P}D^n$ itself ?

For instance we could define for an ordinal $\alpha$:

\[\mathfrak{D}_0 =  \bigcup_{n\geq 1} \mathcal{P}D^n \cup\bigcup_{n\geq 1} D_{-1}^n\] \[\mathfrak{D}_{\alpha+1} = \bigcup_{n\geq 1}\mathcal{P}\mathfrak{D}_\alpha^n \cup  \mathfrak{D}_\alpha \text{    for any ordinal } \alpha\]\[\mathfrak{D}_{\beta} = \bigcup_{\gamma < \beta} \mathfrak{D}_{\gamma} \text{     for $\beta$ a limit ordinal}\]

 and then set $\mathbb{D}_{-1} = \bigcup_{\alpha\in ord} \mathfrak{D}_\alpha$. It seems we should also impose a condition that for any element $d$ of $D_i$ for $i\geq 1$ the intersection of its extension with $\mathbb{D}_{-1}^{(n)}$ should have elements all belonging to a certain $\mathfrak{D}_\kappa$.

Thus we add to $\lambda IL$ the term formation rule:

8. If $T:n$ is a closed term then $\{T\} : -1$.

Observation: we could introduce a set-theoretic membership relation $\epsilon \in D_2$ and an intensional extension membership relation $\Delta \in D_2$ (restricted so as to avoid Russell-type paradoxes). So that $(x,y)$ belongs to the extension of $\epsilon$ iff $x,y \in \mathbb{D}_{-1}$ and $x \epsilon y$ (as sets). And $(x,y)$ belongs to the extension of $\Delta$ iff $x \in {D}_{-1}$ and $y \in D_1$ and $x$ is in the extension-set of $y$ and also...

Another major theme to explore is the relationship between the problems with concepts, individual concepts and definite descriptions in particular, and Cohen's forcing methods.  Some interesting suggestions are provided by the forgotten Portuguese philosopher Arnaldo de Miranda Barbosa (1916 – 1973) who was a contemporary of Edmundo Curvelo (1913 - 1954). The open, dynamic, revisable nature of concepts (existing at the same time as their essential fixed attributes), in particular the 'infinite' and potential nature of individual concepts is admirably captured by forcing notions.  Thus individual concept = generic filter in a forcing poset $(\mathbb{P}, \leq, 1)$, or more generally, a $\mathbb{P}$-name which we can think of as a set whose members are qualified by elements of $\mathbb{P}$. This is understood as follows.  To describe a set we list its members and then the members of these members and so forth until arriving at $\emptyset$ or atoms. A $\mathbb{P}$-name is just such a set but when we list a member we attach a $p$ to this membership relation. The $G$-valuation, for $G$ a generic filter, of a $\mathbb{P}$-name just 'prunes' the vertices of this labelled set whose corresponding $p$ is not in $G$. Since $G$ is a filter, given a set $x$ we can view it as a name with all $p$ equal to $1$. It is then its own $G$-valuation. It is also easy to get $G$ itself as a valuation of a certain name.
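In the standard notation (e.g. Kunen), which matches this description: \[ \mathrm{val}(\tau, G) = \{\, \mathrm{val}(\sigma, G) : (\sigma, p) \in \tau,\ p \in G \,\}, \qquad \check{x} = \{ (\check{y}, 1) : y \in x \}, \qquad \dot{G} = \{ (\check{p}, p) : p \in \mathbb{P} \}, \] so that $\mathrm{val}(\check{x}, G) = x$ (the set viewed as a name with all labels equal to $1$) and $\mathrm{val}(\dot{G}, G) = G$.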

Wednesday, March 20, 2024

Meaning and Geometry

Concepts do not seem really to refer, at least not in the way definite descriptions seem to.  Not only do concepts not refer to their extensions, but extensions themselves are vague, incomplete, fluid, ambiguous and modally conditioned. Worse than that, we habitually make extensions into objects of reference, although in a qualified way. For instance 'all people alive today' may claim the title of extension of the concept 'people'.  Individuals themselves are problematic, knots and shifting meshes of unified meanings. But if we look at concepts qua meaning - something which is not new -  concepts in terms of comprehension, then there are some suggestive mathematical analogies.  The correspondence between radical ideals of a finitely generated $k$-algebra and algebraic sets in algebraic geometry. This is a kind of Galois correspondence between the lattice of such ideals and the poset of algebraic sets (and there is a similar result for commutative $C^*$-algebras). A contravariant functor $M : \mathcal{A} \rightarrow \mathcal{B}$ so that what is 'greatest' in comprehension (i.e. a maximal ideal) corresponds to what is smallest in extension (a point); for $k$ an algebraically closed field we have that the whole ring (the unit ideal) in turn corresponds to the empty set.  In the case of a commutative ring $R$ this duality is somehow entirely contained within the ring itself; for we can look at $spec(R)$ as a space of ideals (meanings) or at the Zariski topology on $spec(R)$, making $spec(R)$ a topological space. Thus we can look at the elements of this topology (either open or closed set) as spaces or extensions.  A similar situation exists in Galois theory in which we are concerned with  subgroups of $\Gamma(N,K)$  for $N$ a separable normal extension of $K$.  The bigger the subgroup the smaller the corresponding fixed field $K< L < N$.  Group actions are perhaps the analogues of meaning while the spaces they act on (which can be the group itself) are like extensions (in particular group-action orbits are like extensions, they express the unfolding of the content, the possible interpretations of the group on a given point).  Also in forcing techniques it is usual, given a forcing poset $(\mathbb{P}, \leq, 1)$,  to write $p\leq q$ and interpret it as $p$ expressing more information than $q$ or as $p$ extending in a consistent way the information provided by $q$. The idea of $p$ being smaller stems from the fact that the range of possible 'ideal' objects corresponding to $p$ becomes smaller when $p$ conveys more information than $q$.
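Concretely, for a finitely generated algebra over an algebraically closed field $k$ the correspondence reads \[ J \mapsto V(J) = \{ x : f(x) = 0 \text{ for all } f \in J \}, \qquad X \mapsto I(X), \qquad I(V(J)) = \sqrt{J}, \] and it is order-reversing in both directions ($J_1 \subset J_2$ implies $V(J_2) \subset V(J_1)$), which is the precise sense of 'greater in comprehension, smaller in extension'.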

Modality involves a continuous variation, in particular an imaginary deformation of actual circumstance or patterns of experience. It is difficult not to compare this to the concept of moduli space and deformation in mathematics. If group actions express the unfolding of extensions of concepts then this is modally constrained. Hence it is more natural to consider moduli spaces of group actions (or deformations of rings) rather than merely the group action itself.

We also mention that some structures seem to have inexhaustible meaning. That is, they generate in a canonical way potentially infinite (though often periodic) sequences. A famous example is the open problem of the higher homotopy groups of spheres.

One of the biggest errors in philosophy is the idea that by formally analyzing the grammar of a language (and in particular modern English) somehow logical, semantic and ontological knowledge will be handed to us on a plate (and Frege was keenly aware that this is a mistake). 

Sunday, March 17, 2024

Transcendental idealism from Bourbaki to Proclus

A proposal for a new transcendental idealism in which pure mathematics plays a central role - much as it did for Plato and Proclus.  Here are some of the tasks:

1. Study the continuity between ancient Greek mathematics and modern mathematics.  The Greeks had an arithmetic and a geometry, but also the idea of a common mathematics which, it seems, attempted to study common structures of both arithmetic and geometry. Also there was applied mathematics like optics, mechanics, astronomy, etc.  Greek arithmetic corresponds to modern algebraic number theory. Greek geometry to modern algebraic geometry. Common mathematics to commutative algebra.  Thus the incredible and unique development of pure mathematics in France by the Bourbaki group and Grothendieck's school of Algebraic Geometry represents a genuine continuation of the spirit of Greek mathematics.  We must study the philosophical significance of the structures of (finite) group, commutative ring and field (and also linear algebra, group representations and non-commutative algebras and rings).

2. Setting aside category theory we must abstract and study universal forms and processes manifest in modern 'common mathematics', i.e. modern algebra, and how these express the fundamental architectonics of Proclus' metaphysics, especially as outlined in the 'Elements of Theology'.  For example, how the complex can be built up from the simple (decomposition theorems). Or how the complex can be restricted to express simplicity in different ways (different localizations of a ring). How simple objects function as generators and measures or numbers (cf. discrete valuation rings). The omnipresence of hierarchies (in particular their completions such as in the construction of the algebraic closure),  finitude conditions,  dualities, symmetries (in particular of figures in $\mathbb{R}^n$ or the famous Coxeter groups) and different ways of reading finite spectra of structures (Noetherian and Artinian conditions, composition series, derived series,  (co)homology sequences, etc.) . Read Albert Lautman. The 'simplicity' of a structure is sometimes related to the action it has upon itself, in particular in the sense of being able to 'return to itself', that is, the action being transitive - (for instance the case of a group being simple or the fact that all Sylow $p$-subgroups are conjugate). Note the curious duality of $S_n$ with regard to $A_n$ which are simple for $n\geq 5$.

3. Another approach to logic. Abstract logic vs. the concrete living enriched logic - mediated by appropriate special domains - which reveals the essence of logic in a way which necessarily requires the energeia of logic itself.  This is to be investigated in conjunction with Proclus' theory of logoi (see Gregory MacIsaac, The Soul and Discursive Reason in the Philosophy of Proclus, 2001).

4.  Consider the sheafification operation and its application to the construction of the inverse image functor for sheaves $f^*$. There is a moment of atomization by taking stalks and then there is a re-integration by re-instating local coherence.

Is there really such a thing as a merely verbal dispute ?

It seems dubious, in general.  For people do not in general debate whether a certain expression S should or should not be assigned to a certain meaning M or reference O, save for philological reasons such as in the Cratylus, etc.  Even then it is assumed that M and O have been correctly agreed upon and fixed by shared vocabulary.  There could be a dispute among lexicographers about what meaning a given linguistic community assigns to a given expression or whether a proposed definiens is apt. But this is not normative, or only secondarily in the sense that a speaker is then bound to conform to 'correct use'.  The idea of a merely verbal dispute has its origins, perhaps, in legal disputes, quid juris and quid facti. Thus 'is X A ?' can be seen as a debate either about facts or about the correctness of the correspondence of the facts with a legal deduction and definition.  But it does not seem very interesting if a common legal definition of A is not agreed upon. Rather the debate will involve details of deduction both legal and factual. Or else the dispute can be meta-legal, about proposals of new definitions, which is fine. But the intention being thus, there is nothing metalinguistic here.

Wednesday, March 13, 2024

Non-well-founded infinity

 Suppose we wish to define an enigmatic property $\Pi(x)$ of a set $x$.  We have the condition

\[\Pi(x) \leftrightarrow \Pi_1(x) \vee \exists y \in x. \Pi(y) \]\[ \neg \exists y. y \in x \, \rightarrow \,  \Pi(x)\] That is, if $a$ is an atom then $\Pi(a)$. Our question: is it true that $ZFC \vdash \forall x. \Pi(x)$ ? In what sense, in ZFC, is the whole always 'more' in some special sense than the part ?
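A sketch of the expected answer within ZFC (the interest of the question is precisely what happens when Foundation is dropped): $\in$-induction gives \[ \forall x \big( (\forall y \in x.\, \Pi(y)) \rightarrow \Pi(x) \big) \rightarrow \forall x.\, \Pi(x), \] and the antecedent holds here: an empty $x$ satisfies $\Pi$ by the second condition, and a non-empty $x$ satisfies it through the disjunct $\exists y \in x. \Pi(y)$. With an anti-foundation axiom the situation changes; for $x = \{x\}$ with $\neg \Pi_1(x)$ the biconditional no longer forces $\Pi(x)$.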


Sunday, March 10, 2024

Self-reflectivity in natural language

Consider our scheme for natural language $(E,S,O, s, r)$ where $s: E\rightarrow S$ and $r : S \rightarrow O$ are the sense and reference functions (relations).  

What is a word ? It is in general a complex $(e,s,o, S, R)$, consisting of an expression $e$, its meaning $s$, its reference $o$ and the relationships themselves $S$, $R$, existing between $e$ and $s$ and $s$ and $o$ respectively.

Every element or subsystem of the complex can itself be an object, an element of $O$.  For example I can talk about the relation between the expression 'rose' and the concept/meaning rose. We need syntactical devices to make this clear and unambiguous. When we throw around words like 'rose' carelessly we are in general mixing many different entities and relations into one bag.

As Frege noted, the elements of $S$ can themselves be objects. Let $\sigma\in S$. Then there is a $w \in E$ and $\sigma' \in S$ such that $s(w) = \sigma'$ and $r(\sigma') = \sigma$. Why not have a sense which is its own reference ? Or a sense which has as reference the expression of which it is the sense of ?  Or an expression whose reference is the expression itself ? 

And what about aggregates of senses ? Indeed the references of concepts are, perhaps, objects which are aggregates: the extension of rose. We need a syntactic device for this too...

In CIL models we need to introduce a subset $E \subset D_{-1}$ which consists of a model-theoretical representation of the expressions of the language.  We could then internalize $s$ by a predicate $\Delta_E (e, p)$ which reads: the (closed) expression $e$ has sense $p$. But how do we treat $s$ itself as an object ?  If we consider that every subset of $D$ has a representative in $E$ then it seems this will lead to trouble...

Saturday, March 9, 2024

Proposal for a new logic

 A new logic based on the following two rules:

1. There are no formulas besides sentences. Expressions with 'free variables' have no place either syntactically, deductively or semantically.

2. Unbounded quantifiers are meaningless. All quantifiers must specify a domain in which the variable is quantified. For instance in the form $\forall x. M(x) \rightarrow \phi$ which we write $\forall_{x \in M}. \phi$. In CIL $log^Q$ would be a binary operator $log^Q T^{(1)} S$.

Hence ordinary equality $x = x$ must already express a restrictive condition. There must be some $A$ such that $\exists_{x \in A}.x \neq x$.  

For a concept we can distinguish its comprehension from its extension.  There is a duality between the two. The greater the one, the smaller the other and vice-versa.  But this is what we find in algebraic geometry in the duality between (radical) ideals of a finitely generated $k$-algebra and affine varieties. Maximal ideals have the smallest extension but the greatest comprehension.


Sunday, March 3, 2024

What is a system ?

We cannot leave out time to begin with. Nor possibility, multiple times.  A first attempt would be as follows. A system involves an input space $I$, a state space $S$ and an output space $O$. We consider that the outputs are completely determined by states, that is, there is a function $\phi: S \rightarrow O$.  Let $T$ be time which has a total order $<$. Given a set $X$ let $X^T$ be the set of all maps $f : T \rightarrow X$.  Then a system  $\mathfrak{S}$ over $(I,S,O,\phi)$ is a function $\mathcal{S}: I^T \rightarrow \mathcal{P}S^T$.
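A toy finite, discrete-time instance of this first definition (all names and the transition relation are invented for illustration):

# T = {0,1,2}, inputs I, states S, outputs O, with phi : S -> O
I_SET = ['noop', 'push']
S_SET = ['low', 'high']
phi = {'low': '0', 'high': '1'}   # outputs are fully determined by states

def step(state, inp):
    # a nondeterministic transition relation: the set of possible next states
    if inp == 'push':
        return {'high'}
    return set(S_SET) if state == 'high' else {'low'}

def system(input_history, s0='low'):
    # the map S : I^T -> P(S^T): all state histories compatible with the inputs
    histories = {(s0,)}
    for inp in input_history:
        histories = {h + (s,) for h in histories for s in step(h[-1], inp)}
    return histories

def output_histories(input_history, s0='low'):
    # compose with phi to obtain the corresponding output histories
    return {tuple(phi[s] for s in h) for h in system(input_history, s0)}

print(system(('noop', 'push', 'noop')))
print(output_histories(('noop', 'push', 'noop')))

(The convention here includes the initial state in each history, so a length-$n$ input history yields length-$(n+1)$ state histories; nothing depends on this.)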

$I$ has a distinguished element $\bot \in I$ which represents doing nothing. We write $f \triangle g$ for $f,g \in X^T$ if there is a $t \in T$ such that $ t' < t$ implies that $f(t') = g(t')$.

A link function is a function $\upsilon : O \rightarrow I$.  With a link function we can define the plugging of one system into another.  The link function can also be considered as $O \rightarrow I^n$ expressing simultaneous different outputs.

$I$ can be given the structure of a category so we can define composition, etc.  We consider the possibility of multiple input and multiple output (and composition, plugging in of systems, including feedback). Thus there is an abstract cartesian product for a more general version of $I$ (and $O, \phi$, etc.).

We define $X^T_<$ to be the set of all functions $f : \downarrow t \rightarrow X$ where $\downarrow t =\{ t' \in T: t' < t\}$ for some $t$.  Then another definition of system is a function $\mathcal{S}: I^T_< \rightarrow \mathcal{P}S^T_<$ which respects $t$.

We can consider a category $\mathcal{I}^T_<$ consisting of the sets $I^{T^{t}_<}$ of functions $\downarrow t \rightarrow I$ for different values of $t$. If $t_1 < t_2$ then we have a natural restriction map; these restriction maps are the morphisms. Likewise we can define categories $\mathcal{S}^T_<$ and $\mathcal{O}^T_<$.  We can also define the category whose elements, for a given $t$, are sets of elements of $I^{T^{t}_<}$. A system is then a functor

\[ \mathcal{S}:  \mathcal{I}^T_< \rightarrow P\mathcal{S}^T_<\]

$S$ is itself a moduli space of structures, networks. Thus $S = S_s \times S_d$ where the first component is a configuration and the second the global state. 

Note that $I$ is more general than 'information', it can also include matter and energy as in biosystems.

In general our system will be a composition of other systems. We will need a calculus of dynamic reconfiguration, merging, separation, etc.

This second definition of system is better; it is the only one accessible to us, for we cannot observe infinite input, output or state histories.  

Perhaps one of the most fundamental divisions is between systems having an origin (and perhaps an end) and those that are not postulated as having an origin. Thus we need a special state $\bot \in S$ of non-being.  Then we can ask: does the system come into being because of a certain input (does this even make sense) or not ? We can conceive a kind of output from another system which is a self-replication or construction or generation of the new system.  The state at the first instant $t_0$ at which the system has come to be is its initial state $s_0 \in S$.  Thus a generated system is given by a map (functor between categories) $\mathcal{S}: I^{T^{t_0}_<} \rightarrow \mathcal{P}S^{T^{t_0}_<}$ which respects $t$.

Note that the output or state-change is not necessarily to be considered instantaneous. Our framework is general enough to capture delay, feedback and a great variety of different kinds of causality, quasi-causality, indeterminism, etc.  It is however important to be able to give the images  $\mathcal{S}(s)$ in $\mathcal{P}S^{T^{t_0}_<}$  the structure of a $\sigma$-algebra and a measure (in particular complex-valued).

Thus a quantum system associates to each input path $p$ from an initial time $t_0$ to a time $t$ a  space $\Omega_p$ of possible state paths from time $t_0$ to $t$ together with a (complex-valued) probability measure on $\Omega_p$. We assume that all the $\Omega_p$ are endowed with a $\sigma$-algebra structure in a coherent way (perhaps as induced by such a structure on the space of all state paths from $t_0$ to $t$).

If $T$ is the usual real line, then we can consider input sequences which are almost everywhere $\bot$ or only have a finite number of non-$\bot$ inputs. In this special case the framework above reduces to the classical Feynman formalization.

Saturday, March 2, 2024

Is natural language more fine-grained than CIL ?

The following example was proposed:  (A) 'Mary knows that she likes herself' and  (B) 'Mary knows that she likes Mary'. 

(A) corresponds to    $comb_{(0)} link_{\{\{1,2\}\}} comb_{(\star, 1)} K link_{\{\{1,2\}\}} L \,M$

(B) corresponds to     $comb_{(0)} link_{\{\{1,2\}\}}  comb_{(\star,1)} K  comb_{(\star,0)}  LM \,M$.

We note also that $\lambda$CIL can be given another form.  Instead of using $comb$ we use application and, for instance, $\lambda x. D^{(2)}x T$ (assuming $\eta$-equivalence).

Quodlibet

 1. René Thom called quantum mechanics 'the greatest intellectual scandal of the 20th century'. Maybe this was too harsh, but quantu...