Friday, January 17, 2025

Hume, the most misunderstood philosopher

We grant that the Treatise may not be an entirely consistent work and that its precise aim may still be quite unclear. But this does not erase the fact that Hume has suffered historically from being appropriated, perverted and misrepresented by subsequent generations. Hume has had only a few serious or quasi-genuine readers, such as Kant, T. H. Green, Brentano, Meinong, Husserl and Whitehead.

The problem with Hume is that he does not seem to be able to make up his mind whether he is engaging in a radical philosophy in the style of Descartes or in a rational and experimental psychology.

The philosophy of Hume is radically incompatible with subsequent naturalism, so-called empiricism or logical positivism.

The philosophy of Hume is not compatible with the kind of relativism or skepticism exemplified by Sextus Empiricus (whom Hume most certainly read). On the contrary, Hume places a high value on evidence and rigorous proof. Consider this beautifully embarrassing passage from section II of part II (Book I):

But here we may observe, that nothing can be more absurd, than this custom of calling a difficulty what pretends to be a demonstration, and endeavouring by that means to elude its force and evidence. It is not in demonstrations as in probabilities, that difficulties can take place, and one argument counter-ballance another, and diminish its authority. A demonstration, if just, admits of no opposite difficulty; and if not just, it is a mere sophism, and consequently can never be a difficulty. It is either irresistible, or has no manner of force. To talk therefore of objections and replies, and ballancing of arguments in such a question as this, is to confess, either that human reason is nothing but a play of words, or that the person himself, who talks so, has not a Capacity equal to such subjects. Demonstrations may be difficult to be comprehended, because of abstractedness of the subject; but can never have such difficulties as will weaken their authority, when once they are comprehended.

Goodbye Sextus !

Hume was forced to admit that there is a process of abstraction applied even to the most elementary, simple, indecomposable impressions such as coloured points. In various passages Hume uses the expression 'under a certain light'.

Suppose that in the extended object, or composition of coloured points, from which we first received the idea of extension, the points were of a purple colour; it follows, that in every repetition of that idea we would not only place the points in the same order with respect to each other, but also bestow on them that precise colour, with which alone we are acquainted. But afterwards having experience of the other colours of violet, green, red, white, black, and of all the different compositions of these, and finding a resemblance in the disposition of coloured points, of which they are composed, we omit the peculiarities of colour, as far as possible, and found an abstract idea merely on that disposition of points, or manner of appearance, in which they agree. Nay even when the resemblance is carryed beyond the objects of one sense, and the impressions of touch are found to be Similar to those of sight in the disposition of their parts; this does not hinder the abstract idea from representing both, upon account of their resemblance. All abstract ideas are really nothing but particular ones, considered in a certain light; but being annexed to general terms, they are able to represent a vast variety, and to comprehend objects, which, as they are alike in some particulars, are in others vastly wide of each other.  (Book I, part II, section III)

Finally, Hume has given us one of the most beautiful expressions of subjective idealism in the famous passage (end of Book I, part II):

We may observe, that it is universally allowed by philosophers, and is besides pretty obvious of itself, that nothing is ever really present with the mind but its perceptions or impressions and ideas, and that external objects become known to us only by those perceptions they occasion. To hate, to love, to think, to feel, to see; all this is nothing but to perceive. Now since nothing is ever present to the mind but perceptions, and since all ideas are derived from something antecedently present to the mind; it follows, that it is impossible for us so much as to conceive or form an idea of any thing specifically different from ideas and impressions. Let us fix our attention out of ourselves as much as possible: Let us chase our imagination to the heavens, or to the utmost limits of the universe; we never really advance a step beyond ourselves, nor can conceive any kind of existence, but those perceptions, which have appeared in that narrow compass. This is the universe of the imagination, nor have we any idea but what is there produced.

Thursday, January 16, 2025

Projects 2025

1. Extended Second-Order Logic as a general logic for philosophy

2. Ancient phenomenology (Sextus Empiricus, Plotinus, Hume, Kant, Brentano, Vasubandhu, 5 Nikayas)

3. On causality, computability and the mathematical models of nature

4. Biology from an abstract point of view

5. Ethics in Kant, Hegel and Schopenhauer 

(and continue paper on Analyticity and the A Priori)

Wednesday, January 8, 2025

Brentano's phenomenological idealism

Moreover, inner perception is not merely the only kind of perception which is immediately evident; it is really the only perception in the strict sense of the word. As we have seen, the phenomena of the so-called external perception cannot be proved true and real even by means of indirect demonstration. For this reason, anyone who in good faith has taken them for what they seem to be is being misled by the manner in which the phenomena are connected. Therefore, strictly speaking, so-called external perception is not perception. Mental phenomena, therefore, may be described as the only phenomena of which perception in the strict sense of the word is possible.

It is not correct, therefore, to say that the assumption that there exists a physical phenomenon outside the mind which is just as real as those which we find intentionally in us, implies a contradiction. It is only that, when we compare one with the other we discover conflicts which clearly show that no real existence corresponds to the intentional existence in this case. And even if this applies only to the realm of our own experience, we will nevertheless make no mistake if in general we deny to physical phenomena any existence other than intentional existence.

Franz Brentano, Psychology from an Empirical Point of View (1874)

Systems theory

To construct a model of reality we must decide what are to be taken as the basic elements. Postulating such elements is necessary even if they are seen as provisional or only approximative, to be analysed later in terms of a more refined set of basic elements. A very general scheme for models involves distinguishing between time $T$ and the possible states of reality $S$ at a given time $t$, where $T$ is the set of possible moments of time. Thus our model is concerned with the Cartesian product $S\times T$. In modern physics we would require a more complex scheme in which $T$ would be associated with a particular observer. It is our task to decompose or express elements of $S$ in terms of a set of basic elements $E$ and to use such a decomposition to study their temporal evolution.

The most general aspect of $T$ is that it is endowed with an order of temporal precedence $\prec$ which is transitive. We may leave open the question whether $T$ with this order is linear (as in the usual model of the real numbers) or branching. The most fundamental question regarding $T$ concerns the density properties of $\prec$. Is time ultimately discrete (as might be suggested by quantum theory), is it dense (between two instants we can always find a third), or does it satisfy some other property (such as the standard ordering of ordinals in set theory) ? The way we answer this question has profound consequences for our concept of determinism.

For a discrete time $T$ we have a computational concept of determinism which we call strong determinism. Let $t$ be a given instant of time and $t'$ the moment with $t\prec t'$ immediately after $t$. Then given the state $s$ of the universe at time $t$ we should be able to compute the state $s'$ at time $t'$. If this transition function (called the state transition function) is not computable, we may still have determinism regarding certain properties of $s$; this we call weak determinism. Stochastic models also offer a weak form of determinism, although a rigorous formalization of this may be quite involved. A very weak statement of determinism would be simply postulating the non-branching nature of $T$.
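To fix ideas, here is a minimal sketch of strong determinism for discrete $T$ (Python; the state space and the transition rule are arbitrary toy choices, not part of the model itself): the entire trajectory is computed from the initial state by iterating a computable state transition function.

```python
# Toy illustration of strong determinism for discrete time: the state s' at
# the instant t' after t is computed from the state s at t by a fixed,
# computable state transition function.

State = int  # the state of the toy universe, encoded as an integer

def transition(s: State) -> State:
    """A computable state transition function (an arbitrary toy rule)."""
    return (3 * s + 1) % 17

def evolve(s0: State, steps: int) -> list[State]:
    """Strong determinism: the whole trajectory follows from the initial state."""
    history = [s0]
    for _ in range(steps):
        history.append(transition(history[-1]))
    return history

print(evolve(5, 10))  # [5, 16, 15, 12, 3, 10, 14, 9, 11, 0, 1]
```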

We can also consider a determinism which involves not just the state at the previous time but the entire past history of states, with an algorithm which determines not only the next state but the states for a fixed number of subsequent moments. For instance, the procedure could analyze the past history, determine which short patterns occurred most frequently, and then yield as output one of these, which the system would then repeat as if by "habit" (see the sketch below).
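A minimal sketch of such a "habit" procedure (Python; the window length, the tie-breaking and the encoding of states are arbitrary illustrative choices):

```python
from collections import Counter

def habitual_continuation(history: list[int], k: int = 3) -> list[int]:
    """Scan the entire past history, find the most frequent length-k pattern,
    and output it as the next k states: the system repeats it 'by habit'."""
    windows = (tuple(history[i:i + k]) for i in range(len(history) - k + 1))
    pattern, _count = Counter(windows).most_common(1)[0]
    return list(pattern)

past = [0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1]
print(habitual_continuation(past))  # the dominant short pattern in the past
```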

The postulate of memory says that all the necessary information about the past history is somehow codified in the state of the system at the previous time. For a dense time $T$ it is more difficult to elaborate a formal concept of determinism. In this case strong determinism is formulated as follows: given a $t$, a state $s$ of the universe at $t$, and a $t'$ with $t\prec t'$ which is in some sense sufficiently close to $t$, we can compute the state $s'$ at $t'$. Models based on the real numbers, such as the various types of differential equations, are problematic in two ways. First, obtaining strong determinism, even locally, is problematic and will depend on having solutions given by convergent power series expansions with computable coefficients or on numerical approximation methods. Secondly, differential models are clearly only continuum-based approximations (idealisations) of more complex real systems having many aspects which are actually discrete. The determinism of differential models can thus be seen as based on an approximation of an approximation.

We now consider the states of the universe $S$. The most basic distinction that can be made is that between a substrate $E$ and a space of qualities $Q$. There are also alternative approaches, such as that of Takahara et al. based on the black-box model, in which for each system we consider the Cartesian product $X\times Y$ of inputs $X$ and outputs $Y$. In that model we are led to derive the concept of internal state as well as that of the combination of various different systems. We can easily represent this scenario in our model by simulating the input and output signalling mechanism associated with a certain subset of $E$. States of the universe are given by functions $\phi: E\times T \rightarrow Q$. We will see later that it is in fact quite natural to replace such a function by the more general mathematical structure of a "functor". To understand $\phi$ we must consider the two fundamental alternatives for $E$: the Lagrangian and Eulerian approaches (these terms are borrowed from fluid mechanics).

In the Lagrangian approach the elements of $E$ represent different entities and beings, whilst in the Eulerian approach they represent different regions of space or of some medium - such as mental or semantic space. These can be, for instance, points or small regions in standard Euclidean space. The difficulty with the Lagrangian approach is that our choice of the individual entities depends on context and scale, and in any case we have to deal with the problem of beings merging or becoming connected, coming to be or disappearing, or the indiscernibility problem in quantum field theory. The Eulerian approach, besides being more natural for physics, is also very convenient in biochemistry and cellular biology where we wish to keep track of individual biomolecules or cells or nuclei of the brain. In computer science the Lagrangian approach could be seen in taking as basic elements the objects in an object-oriented programming language, while the Eulerian approach would consider the variation in time of the content of a specific memory array.

We call the elements of $E$ cells and $\phi: E \times T \rightarrow Q$ the state function. For now we say nothing about the nature of $Q$. In the Eulerian approach $E$ is endowed with a fundamental bordering or adjacency relation $\oplus$ which is not reflexive, that is, a cell is not adjacent to itself. The only axioms we postulate are that $\oplus$ is symmetric and that each cell has at least one adjacent cell. Thus $\oplus$ induces a graph structure on $E$. This graph may or may not be planar, spatial or embeddable in $n$-dimensional space for some $n$.

We can impose a condition making $E$ locally homogeneous in such a way that each $e\in E$ has the same number of uniquely identified neighbours. For the case of discrete $T$, the condition of local causality states that if we are in a deterministic scenario and at time $t$ we have a cell $e$ with $\phi(e,t) = q$, then the procedure for determining $\phi(e,t')$ at the next instant $t'$ will only need the information regarding the value of $\phi$ for $e$ and its adjacent cells at the previous instant. Many variations of this definition are possible, in which adjacent cells of adjacent cells may also be included. This axiom is seen clearly in the methods of numerical integration of partial differential equations.

Now suppose that $T$ is discrete, that $E$ is locally homogeneous, and that we indicate the neighbours of a cell $e$ by $e\oplus_1 e_1, e\oplus_2 e_2, \ldots, e\oplus_k e_k$. Then the condition for homogeneous local causality can be expressed as follows. For any time $t$ and cells $e$ and $e'$ such that $\phi(e,t) = \phi(e',t)$ and $\phi(f_i,t) = \phi(f'_i,t)$, where $f_i$ and $f'_i$ are the corresponding neighbours of $e$ and $e'$, we have that $\phi(e,t') = \phi(e',t')$, where $t'$ is the instant after $t$.

An example satisfying the conditions of the above definition is that of a symbol propagating along a direction $j$: if a cell $e$ is in state 'on' and the cell $e'$ with $e\oplus_j e'$ is in state 'off', then in the next instant $e$ is in state 'off' and $e'$ is in state 'on' (see the sketch below). Stochastic processes such as diffusion can also easily be expressed in our model.
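A minimal sketch of this propagation rule on a one-dimensional ring of cells (Python; the ring topology and the on/off encoding are illustrative choices). Each cell's next state depends only on its own neighbourhood, so the rule is homogeneous and locally causal:

```python
def step(cells: list[int]) -> list[int]:
    """One instant of the propagating-symbol rule on a ring: the next state of
    cell i depends only on the neighbour behind it, so 'on' moves one cell
    per time step in direction j = +1."""
    n = len(cells)
    return [cells[(i - 1) % n] for i in range(n)]

cells = [0, 0, 1, 0, 0, 0]
for t in range(4):
    print(t, cells)
    cells = step(cells)
# 0 [0, 0, 1, 0, 0, 0]
# 1 [0, 0, 0, 1, 0, 0]  ... the symbol drifts one cell to the right each instant
```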

A major problem in the Eulerian approach is to define the notion of identity of a complex being: for instance, how do biological structures persist in their identity despite the constant flux and exchange of matter, energy and information with their environment ?

We clearly must have a nested hierarchy of levels of abstraction and levels of approximation, and this calls for a theory of approximation. Some kind of metric and topology on $E$, $T$ and the space of state functions $\phi$ is necessary. Note that all the previous concepts carry over directly to the Lagrangian approach as well. In this approach a major problem involves formalising the way in which cells can combine with each other to form more complex beings. If we consider the example of biochemistry then we see that complex beings made up from many cells have to be treated as units as well, and that they will have their own quality space $Q'$ which will contain elements not realisable by a single $e\in E$. This suggests that we need to add a new relation on $E$ to account for the joining and combination of cells and to generalise the definition of $\phi:E\times T \rightarrow Q$.

We take the Lagrangian approach. We now add a junction relation $J$ on $E$. When $e J e'$ then $e$ and $e'$ are to be seen as forming an irreducible being whose state cannot be decomposed in terms of the states of $e$ and $e'$. The state transition function must not only take into account all the neighbours of a cell $e$ but all the cells that are joined to any of these neighbours.

Let $J'$ be the transitive closure of $J$. Let $\mathcal{E}_J$ denote the set of subsets of $E$ such that for each $S\in \mathcal{E}_J$ we have that if $e,e' \in S$ then $e J' e'$. Inclusion induces a partial order on $\mathcal{E}_J$. Instead of $Q$ we consider a set $\mathcal{Q}$ of different quality spaces $Q$, $Q'$, $Q''$, ..., which represent the states of the different possible combinations of cells. Let us assume that $Q$ represents, as previously, the states of single cells. For instance, a combination of three cells will have states which will not be found in a combination of two cells or in a single cell. Suppose $e$ is joined to $e'$ and the conglomerate has state $q \in Q'$. Then we can consider $e$ and $e'$ individually, and there is a function which restricts $q$ to states $q_1$ and $q_2$ of $e$ and $e'$. In category theory there is an elegant way to combine all this information: the notion of presheaf. To define the state functions for a given time $t$ we must consider a presheaf:
\[ \Phi_J: \mathcal{E}_J^{op} \rightarrow \mathcal{Q}\]
The state of the universe at a given instant will be given by compatible sections of this presheaf. To define this we need to consider the category of elements $El(\mathcal{Q})$ associated to $\mathcal{Q}$, whose objects consist of pairs $(Q, a)$ where $a\in Q$ and whose morphisms $f:(Q,a) \rightarrow (Q',a')$ are maps $f:Q \rightarrow Q'$ which preserve the second components, $f(a) = a'$. Thus a state function at a given time is given by a functor:
\[ \phi_J: \mathcal{E}_J \rightarrow El(\mathcal{Q}) \]
But $J$ can vary in time and we need a state transition function for $J$ itself which will clearly also depend on $\phi_J$ for the previous moment. Thus the transition function will involve a functor:
\[ \mathcal{J}_J: hom(\mathcal{E}_J , El(\mathcal{Q})) \rightarrow Rel(E) \]
and will yield a functor
\[ \phi_{\mathcal{J}_J(\phi_J)}: \mathcal{E}_{ \mathcal{J}_J(\phi_J)} \rightarrow El(\mathcal{Q}) \]
Note that we could also consider a functor
\[ \mathcal{E}: Rel(E) \rightarrow Pos \]
which associates $\mathcal{E}_J$ to each $J$.
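The combinatorial core of this construction can be made concrete. Below is a minimal sketch (Python; the cells, the junction relation, and the choice of a reflexive-symmetric-transitive closure so that $\mathcal{E}_J$ contains the singletons are all illustrative assumptions) computing $J'$ and the inclusion-ordered elements of $\mathcal{E}_J$:

```python
from itertools import combinations

E = {1, 2, 3, 4}
J = {(1, 2), (2, 3)}  # hypothetical junction relation: 1-2 and 2-3 are joined

def closure(J: set, E: set) -> set:
    """Reflexive, symmetric and transitive closure J' of J on E."""
    R = {(e, e) for e in E} | J | {(b, a) for (a, b) in J}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(R):
            for (c, d) in list(R):
                if b == c and (a, d) not in R:
                    R.add((a, d))
                    changed = True
    return R

Jp = closure(J, E)

# E_J: the subsets of E whose members are pairwise J'-related,
# partially ordered by inclusion.
E_J = [set(s) for r in range(1, len(E) + 1)
       for s in combinations(sorted(E), r)
       if all((a, b) in Jp for a in s for b in s)]
print(E_J)  # singletons, {1,2}, {1,3}, {2,3} and {1,2,3}; never {4} with others
```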

The relation $J$ is the basic form of junction. We can use it to define higher-level complex concepts of connectivity, such as that which connects various regions of biological systems. We might define living systems as those systems that are essentially connected: systems in which the removal of any part results necessarily in the loss of some connection between two other parts. This can be given an abstract graph-theoretic formulation which poses interesting non-trivial questions. Finally, we believe this model can be an adequate framework to study self-replicating systems.

Sunday, December 22, 2024

Some topics in the philosophy of nature

The relationship between the concepts of determinism, predetermination, computability, cardinality, causality and the foundations of the calculus. To study this we need a mathematical general systems theory, hopefully general enough for this investigation. 

It is clear that 'determinism' is a very complex and ambiguous term and that it has only been given a rigorous sense in the case of systems equivalent to Turing machines, which are a case of finite or countably infinite systems. Note that there are finite or countably infinite systems which are not computable and hence not deterministic in the ordinary sense of this term. Thus this sense of determinism implies computability, which in turn implies that to determine the evolution of the system we need consider only a finite amount of information involving present or past states. And we should ask how the even more complex concept of 'causality' comes in here. What are we to make of a concept of causality defined in terms of such computable determinism ? Note that a system can be considered deterministic in a metaphysical sense without being in fact computable.

A fundamental problem is understanding the role of differential (and integral) equations in natural science and the philosophy of nature. The key aspects here are: the use of uncountable models, and the expression of causality in a way distinct from the computational deterministic model above. Note the paradox: on the one hand, 'numerical methods' are discrete, computable, deterministic approximations of differential models. On the other hand, the differential models used in science are themselves clearly obtained as approximations and idealizations of nature, as for instance in the use of the Navier-Stokes equations, which discard the molecular structure of fluids.

One problem is to understand the causality and determinism expressed in differential models in terms of non-standard paradigms of computation beyond the Turing limit. One kind of hypercomputational system can be defined as carrying out a countably infinite number of computational steps in a finite time.

For a mathematical general systems theory we have considered two fundamental kinds of systems: these are transpositions to generalized cellular automata/neural networks of the Eulerian and Lagrangian approaches to fluid mechanics. It is clearly of interest to consider non-countable and hypercomputational versions of such general cellular automata: to be able to express differential models in a different way and to generalize them by discarding the condition of topological locality (already found in integro-differential equations and the convolution operation, Green's functions, etc.).

The deep unsolved problems regarding the continuum are involved here, as is their intimate connection to the concepts of determinism, causality, computability and the possibility of applying differential models to nature.

A special case of this problem involves a deeper understanding of all the categories of functions deployed in modern analysis: continuous, smooth, with compact support, bounded variation, analytic, semi- and sub-analytic, measurable,  $L^p$, tempered distributions, etc. How can 'determinism' and even computability be envisioned in models based on these categories?

What if nature was ultimately merely measurable rather than continuous ? That is, the temporal evolution of the states of systems modeled as a function $\phi: T \rightarrow S$ must involve some kind of merely measurable map $\phi$ ? Our only 'causality' or 'determinism' then must involve generalized derivatives in the sense of distributions. And yet the system can  still be deterministic in the metaphysical sense and even hypercomputational in some relevant sense. Or maybe such maps are generated by sections of underlying deterministic continuous processes ? 

General determinism and weak causality involve postulating properties of the evolution of the system which may not be logically or computationally sufficient to predict the evolution of the system in practice. This is similar to the situation in which, given a recursive axiomatic-deductive system, we cannot know in practice whether a given sentence can be derived or not. Also, constructions like the generalized derivative of locally integrable functions involve discarding much information.

For quantum theory: actual position and momentum are given by non-continuous measurable functions over space-time (we leave open the question of particle or wave representations). The non-continuity implies non-locality which renders, perhaps, the so-called 'uncertainty principle' more intelligible. The wave-function $\psi$ is already a kind of distribution or approximation containing probabilistic information. Quantum theory is flawed because the actual system contains more information than is embodied in the typical wave-function model - a situation analogous to the way in which the generalized derivative involves discarding information about the function.

Uncertainty, indeterminism and non-computability are thus a reflection not of nature itself but of our tools and model-theoretic assumptions. In the same way it may well be that it is not logic or mathematics that are 'incomplete' or 'undecidable' but only a certain paradigm or tool-set that we happen to choose to employ.

Another topic: the study of nature involves hierarchies of models which express different degrees and modes of approximation or ontological idealization - but which must be ordered in a coherent way. Clearly the indeterminism or problems of a given model at a given level arise precisely from this situation; small discrepancies at a lower level which have been swept under the rug can in the long run have drastic repercussions on higher-level models, even if most of the time they can be considered negligible. And we must be prepared to envision the possibility that such hierarchies are imposed by the nature of our rationality itself as well as by experimental conditions - and that the levels may be infinite.

Computation, proof, determinism, causality - these are all connected to temporality, to the topology and linear order of time; and a major problem involves the uncountable nature of this continuum.

In mathematical physics we generally have an at least continuous function from an interval of time into Euclidean space, configuration space or a space-time manifold, describing a particle or system of particles. More generally we have fields (sections of finite-dimensional bundles) defined on such spaces which are in general at least continuous, often locally smooth or analytic. This can be generalized to distributions, to fields of operators or even operator-valued distributions. But what if we considered, at a fundamental level, movements and fields which were merely measurable and not continuous (or only section-wise continuous) ? Measurable and yet still deterministic. Does this even make sense ? At first glance 'physics' would no longer make sense, as there would no longer be any locality or differential laws. But there could still be a distribution version of physics and a version of physics over integrals. Suppose the motion of a particle is now a merely measurable (or locally integrable) function $\phi: T \rightarrow \mathbf{R}^3$, and consider a free particle. In classical physics, if we know the position and momentum at a given time then we know the position (and momentum) at any given time (uniform linear movement). But there is no canonical choice for a non-continuous function. Given a measurable function $f: T \rightarrow \mathbf{R}^3$ we can integrate and define a probability density $\rho: \mathbf{R}^3 \rightarrow P$ which determines how frequently the graph of $f$ intersects a small neighbourhood of a point $x$. But what are we to make of a temporally evolving $\rho$ (we could consider a rapid time, at the Planck scale, and a slow time) ?

Tentative definition of the density function:

\[  \rho_f (x) =   \lim_{x \in U}\frac{\mu(f^{-1} U)}{m(U)} \]

where $\mu$ is a Borel measure on $T$ and $m$ the Lebesgue measure on $\mathbf{R}^3$. Question: given a continuous function $g : \mathbf{R}^n \rightarrow \mathbb{K}$, where $\mathbb{K}$ is either the real or complex numbers, and a (signed) Borel measure $\mu$ on $T\subset \mathbf{R}$, is there a canonical measurable non-continuous function $f :T \rightarrow \mathbf{R}^n $ such that $\rho_f = g$ ? It would seem not. Any choice among possible 'random' candidates implies extra information. And we need to make sense of this question for continuous families $g_t$ of continuous functions, for example $g(t) = e^{i\pi t}$. The differential laws of $g_t$ might need to be seen as finite approximations.
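As a numerical sanity check on this tentative definition, here is a sketch (Python with NumPy; the uniform measure $\mu$ on $T = [0,1]$, the one-dimensional target space and the smooth test function are all illustrative assumptions) approximating $\rho_f(x)$ by sampling:

```python
import numpy as np

def rho(f, x: float, eps: float = 1e-3, n: int = 1_000_000) -> float:
    """Approximate rho_f(x) = mu(f^{-1}(U)) / m(U) for the small
    neighbourhood U = (x - eps, x + eps), with mu uniform on T = [0, 1]."""
    t = np.random.uniform(0.0, 1.0, n)   # sample mu on T
    hits = np.abs(f(t) - x) < eps        # times t whose value f(t) lands in U
    return hits.mean() / (2 * eps)       # mu(f^{-1} U) divided by m(U)

f = lambda t: t ** 2                     # a computable test function on T
print(rho(f, 0.25))                      # ~ 1/(2*sqrt(0.25)) = 1.0
```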

Define real computable process. 

Another approach: we have a measurable map $f: T \rightarrow A \times B$. Suppose that we know only $f_A(t_0)$ and not $f_B(t_0)$, while the knowledge of both would be theoretically enough to compute $f(t)$ for $t > t_0$. Then given a $U \subset A \times B$ we can take the measure of the set $V \subset B$ such that if $f_B(t_0) \in V$ then $f(t) \in U$.

If a trajectory is measurable and not continuous, does velocity or momentum even make sense ? 

For $f : T \rightarrow \mathbf{R}^3$ measurable (representing the free movement of a single particle) we can define for each $I \subset T$, $\rho_I (x) =   \lim_{x \in U}\frac{\mu(f^{-1}(U) \cap I)}{m(U)}$, which can be thought of as a generalized momentum, but where causality and temporal order are left behind. Thus we could assign to each open interval $I \subset T$ a density function $\rho_I: \mathbf{R}^3 \rightarrow \mathbb{K}$. We can then postulate that the variation of the $\rho_I$ with $I$ is continuous in the sense that, given an $\epsilon$, we can find a $\delta$ such that for any partition $I_i$ of $T$ with $d(I_i) < \delta$ we have that $|| \rho_{I_{i+1}} - \rho_{I_i}|| < \epsilon$ for some suitable norm.

This construction can be repeated if we consider hidden state variables for the particle, that is, $f : T \rightarrow \mathbf{R}^3 \times H$ for some state-space $H$. Of course we cannot in practice measure $H$ at a given instant of time for a given particle. Note also that if we have two measurable maps then indiscernibility follows immediately - individuation is tied to continuity of trajectories.

Space-time is like a fluctuating ether which induces a Brownian-like motion of particles - except not continuous at all, only measurable. Maybe it is $H$ that is responsible for directing the particle (a vortex in the ether) and making it behave classically in the sense of densities.

A density function (for a small time interval) moving like a wave makes little physical sense. Why would the particle jump about in its merely measurable trajectory and yet have such a smooth deterministic density function ? It is tempting to interpret the density function as manifesting some kind of potential - like a pilot wave.

The heat equation $\partial_t f = k\,\partial^2_x f$ represents a kind of evening out of a function $f$: valleys ($f''>0$) are raised and hills ($f''< 0$) are levelled. But heat is a stochastic process. Maybe this provides a clue to understanding the above - except in this case there is only one very rapid particle playing the role of all the molecules.
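A minimal sketch of this evening-out by explicit finite differences (Python with NumPy; the grid, diffusivity and time step are arbitrary choices satisfying the usual stability condition $k\,\Delta t / \Delta x^2 \leq 1/2$):

```python
import numpy as np

n, k, steps = 100, 1.0, 500
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
dt = 0.25 * dx**2 / k                     # satisfies k*dt/dx^2 <= 1/2
u = np.sin(x)                             # one hill (u'' < 0), one valley (u'' > 0)

for _ in range(steps):
    u_xx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2  # discrete u''
    u = u + k * dt * u_xx                 # hills are lowered, valleys raised

print(u.max(), u.min())  # both strictly closer to 0 than the initial 1 and -1
```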

Another approach: given a continuous function in a region $\phi: U \rightarrow \mathbf{R}^+$, construct a nowhere continuous function $\tau :T \rightarrow U$ such that $\phi$ is the density of $\tau$ in $T$. This is the atomized field. The Schrödinger equation is then an approximation, just like the Navier-Stokes equations which ignore the molecular structure of fluids.

Newton's first law of motion expresses minimality and simplicity for the behaviour of a free particle. We can say likewise that an atomized field, if free, is completely random, spread out uniformly in a region of space - as yet without momentum. Momentum corresponds to a potential which directs and influences the previously truly free atomized field. Our view is that a genuinely free particle or atomized field is one in which the particle has equal probability of being anywhere (i.e. it does not have cohesion; any cohesion must be the effect of a cause). Thus Newton's free particle is not really free but a particle under the influence of a directed momentum field. There are rapid processes which create both the atomized field (particle) and the momentum field.

Why should we consider a gravitational field as being induced by a single mass when in reality it only manifests when there are at least two ? 

In Physics there are PDEs which are derived from ODEs of physics at a more fundamental level and there are PDEs that are already irreducibly fundamental.

A fundamental problem in philosophy: the existence of ill-posed problems (in PDEs), even with smooth initial conditions. This manifests not so much the collapse of the differential model of determinism as the essentially approximative nature of PDE modelling. Philosophically, the numerical approximation methods and the PDEs might be placed on an equal footing: they are both approximations of reality.

Weak solutions of PDEs are in general not unique. Goodbye determinism. Entropy.


Monday, September 9, 2024

New logical investigations

Let us face it: we know and understand very little about the 'meaning' of such homely terms as 'water' (mass noun). Meaning is not 'inscrutable', just very complex, and it has not been investigated with complete candor or with penetrating enough insight.

A linguistic segment may acquire individual additions or variations of meaning depending on linguistic context  (there is no water-tight segmentation) and yet still contain a certain invariant meaning in all these cases - all of which cannot be brushed away under the term 'connotation'.  For instance compare the expressions 'water is wet', 'add a little water' and 'the meaning of the term 'water''. 

This is clearly related to psychologism and its problems and the inter-subjective invariance of meaning.

In literary criticism there is actually much more linguistic-philosophical acumen, for example in asking 'what does the term X mean for the poet' or 'explain the intention behind the poet's use of the term X'.

Let us face it: counterfactuals and 'possible worlds', if they are to make any sense at all, demand vastly more research and a more sophisticated conceptual framework. We do not know if there could be any world alternative (in any degree of detail) to the present one. The only cogent notion of 'possible world' is a mathematical one or one based on mathematical physics. There is at present no valid metaphysical or 'natural' one - or one not tied to consciousness and the problem of free will.

Given a feature of the world we cannot say a priori that this feature could be varied in isolation in the context of some other possible world. For instance imagining an alternative universe exactly like this one except that the formula for water is not H2O is not only incredibly naive but downright absurd.

Just as it is highly problematic that individual features of the world could vary in isolation in the realm of possibility so too is it highly problematic that we can understand the 'meaning' of terms in isolation from the 'meaning' of the world as a whole.

There is no reason not to consider that there is a super-individual self (Husserl's transcendental ego or Kant's transcendental unity of apperception ) as well as a natural ego in the world.  What do we really know about the 'I', the 'self' , all its layers and possibilities ? The statement 'I exist'  is typically semantically complex and highly ambiguous. But it has at least one sense in which it cannot be 'contingent'. Also considerations from genetic epistemology can lead to doubt that it is a priori.  

There are dumb fallacies which mix up logic and psychology, ignore one of them, artificially separate them or ignore obvious semantic distinctions. And above all the sin of confusing the deceptively simple surface syntax of natural language with authentic logical-semantic structure ! For instance: 'Susan is looking for the Loch Ness Monster' and 'Susan is looking for her cat'.  It is beyond obvious that the first sentence directly expresses something that merely involves Susan's intentions and expectations whilst the second sentence's most typical interpretation involves direct reference to an actual cat. The two sentences are of different types.

We live in the age of computers and algorithms. Nobody in their right mind would wish to identify a 'function' with its 'graph' except in the special field of mathematics or closely connected areas. If we wish to take concepts as functions (or take functions from possible worlds to truth values) then obviously their intensional computational structure matters as much as their graphs. Hence we bid farewell to the pseudo-problems of non-denoting terms.

Proper names are like titles for books we are continuously writing during our life - and in some rare cases we stop writing and discard the book. And one book can be split into two or two books merged into one.

It is very naive to think that in all sentences which contain so-called 'definite descriptions'  a single logical-semantic function can be abstracted.  We must do away with this crude naive abstractionism and attend to the semantic and functional richness of what is actually meant without falling into the opposite error of meaning-as-use, etc.

For instance 'X is the Y' can occur in the context of learning: a fact about X is being taught and incorporated into somebody's concept of X. Or it can be an expression of learned knowledge about X: 'I have been taught or learned that X is the Y'. Or it can be an expression of the result of an inference: 'it turns out that it is X that is the Y'. Why must all of this correspond to the same 'proposition' or Sinn ?

Abstract nouns are usually learnt in one go, as part of linguistic competence, while proper names reflect an evolving, continuous, even revisable learning process. Hence these two classes obey different logical laws.

The meaning of the expression 'to be called 'Mary'' must contain the expression 'Mary'. So we know something about meanings ! 

How can natural language statements involving dates be put into relation with events in a mathematical-scientific 'objective' world (which has no time or dynamics) when such dates are defined and meaningful only relative to human experience ? What magically fixes such a correspondence ? The same goes for the here and now in general. What makes our internal experience of a certain chair correspond to a well-defined portion of timeless spatial-temporal objectivity ?

What if most if not all modern mathematical logic could be shown to be totally inadequate for human thought in general and in particular philosophical thought and the analysis of natural language ? What if modern mathematical logic were shown to be only of interest to mathematics itself and to some applied areas such as computer science ? 

By modern mathematical logic we mean certain classes of symbolic-computational systems starting with Frege but also including all recent developments. All these classes share or move within a limited domain of ontological, epistemic and semantic presuppositions and postulates.

What if entirely different kinds of symbolic-computational systems are called for to furnish an adequate tool for philosophical logic, for philosophy, for the analysis of language and human thought in general ? New kinds of symbolic-computational systems based on entirely different ontological, epistemic and semantic postulates ?

The 'symbols' used must 'mean' something, whatever we mean by 'meaning'. But what, exactly ? Herein lies the real difficulty. See the books of Claire Ortiz Hill.  It is our hunch that forcing techniques and topos semantics will be very relevant.

However there remains the problem of infinite regress: no matter how we effect an analysis in the web of ontology, epistemology and semantics, this will always involve elements into which the analysis is carried out. These elements in turn fall again directly into the scope of the original ontological, epistemological and semantic problems.

If mathematics, logic and philosophy have important and deep connections, it was perhaps the way that these connections were conceived that was mistaken. Maybe it is geometry rather than classical mathematical logic that is more directly relevant to philosophy.

What if a first step towards finding this new logic were the investigation of artificial ideal languages (where we take 'language' in the most general sense possible) and the analysis of why and how they work as a means of communication ?

Consider an alien race that only understood first-order logic. How would we explain the rules of Chess, Go or Backgammon ? And how do we humans understand and learn the rules of these games when their expression in first-order logic is so cumbersome, convoluted and extensive ? Expressing them in a programming language is much simpler... perhaps we need higher-level languages which are still formal and can be reduced to lower-level languages as occasion demands. How do we express natural language high-level game concepts, tactics and strategy, in terms of low-level logic ?

Strange indeed to think that merely recursively enumerable systems of signs can represent or express all of reality... how can uncountable reality ever be captured with at most countable languages (cf. the Löwenheim-Skolem theorems, the problems with categoricity, non-standard analysis, etc.) ?

All mathematical logic - in particular model theory - seems itself to presuppose that it is formalizable within ZF(C). Is this not circular ? Dare to criticize standard foundations; dare to propose dependent type theory, homotopy type theory, higher topos theory as alternative foundations.

The Löwenheim-Skolem theorems cannot be used to argue for the uncertainty or imprecision of formal systems because, for instance: (i) these results concern first-order logic, and the situation for second- and higher-order logic is radically different (for instance with regard to categoricity); (ii) according to the formal verification principle these metatheorems must themselves be provable in principle in a formal metasystem, and if we do not attach precise meaning to the symbols and certainty to the deductive conclusions in the metasystem, what right have we to attach any definite meaning or certainty to the Löwenheim-Skolem theorems themselves ?

But of course the formal verification principle needs to be formulated with more precision, for obviously, given any sentence in a language, we can always think of a trivial recursive axiomatic-deductive system in which this sentence can be derived. The axiomatic-deductive system has to satisfy properties such as axiomatic-deductive minimality and optimality and type-completeness, i.e., it must capture a significantly large quantity of true statements of the same type - the same 'regional ontology'. Also the axioms and primitive terms must exhibit a degree of direct, intuitive obviousness and plausibility. And the system must ideally be strong enough to express the 'core analytic' logic.

The formal mathematics project might well be the future of mathematics itself.

The problems of knowledge: either we go back to first principles and concepts, the seeds, but lose the actual effective development, unfolding, richness, life - bearing in mind also that the very choice of principles might have to change according to goals and circumstance - or else we delve into the unfolding richness of science but become lost in the alleys of specialization and limited, partial views. Either we are too far away to see detail and life, or we are too close to see anything but a small part and miss the big picture. Also, when we are born into the world 'knowledge' is first forced onto us; there is both contingency and necessity. It is only later that we review what we have learnt. A great step is when we step back to survey knowledge itself, attempt to obtain knowledge about knowledge, to criticize knowledge. Transcendental knowledge is not the same as the ancient project of 'first philosophy'.

If we take natural deduction for first-order logic and assume the classical expression of $\exists$ in terms of $\forall$, then we do not need the natural deduction rules for $\exists$ at all (a derivation is sketched below). This can be used as part of my argument related to ancient quantifier logic. Aristotle's metalogic in the Organon is second-order or even third-order.
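As a minimal sketch of the derivability claim (a standard observation, stated here only in outline): take the classical definition
\[ \exists x\, \varphi \;:=\; \neg \forall x\, \neg \varphi. \]
Then $\exists$-introduction is derivable: given $\varphi[t/x]$, assume $\forall x\, \neg \varphi$; by $\forall$-elimination we obtain $\neg \varphi[t/x]$, contradicting $\varphi[t/x]$; discharging the assumption yields $\neg \forall x\, \neg \varphi$. The $\exists$-elimination rule is derived similarly, with classical reductio doing the work of the eliminated rule.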

Overcoming the categories and semantics - or rather showing their independence and holism. With this theme we can unite such disparate thinkers as Sextus, Nâgârjuna, Hume and Hegel - and others to a lesser extent (for instance Kant). Notice the similarity between the discussion of cause in Sextus, Nâgârjuna and Hegel. The difference is that Sextus aims for equipollence, Nâgârjuna to reject all the possibilities of the tetralemma while Hegel continuously subsumes the contradictions into more contentful concepts hoping thereby to ladder his way up to the absolute. And yet how pitiful is the state of logic as a science....once we move away from classical mathematics and computer science.  The idea of a formal mathematical logic (or even grammar) adequate for other domains of thought, remains elusive ! 

We can certainly completely separate the content and value of Aristotle's Organon and Physics from Aristotle's politics and general world-view. Can we do this for Plato too ? 

Cause-and-effect: the discrete case. Let $Q$ denote the set of possible states of the universe at a given time and denote the state at time $t$ by $q(t)$. This will depend on the states at previous values of $t$. Thus determinism is expressed by functions $f_t: \Pi_{t' < t} Q \rightarrow Q$. Now suppose that $Q$ can be decomposed as $S^B$ where $B$ represents a kind of proto-space and $S$ the local states for each element $b\in B$ (compare the situation in which an elementary topos turns out to be a Grothendieck topos). Now we can ask about the immediate cause of the states of certain subsets of $B$ at a time $t$ - that is, the subset of $B$ whose variation of state would change the present state (see the sketch below). But a more thorough investigation of causality must involve continuity and differentiability in an essential way. Determinism and cause-and-effect depend on the remarkable order property of the real line and indeed on the whole problem of infinitesimals...
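Here is a minimal sketch of 'immediate cause' in the discrete case (Python; the proto-space $B$, the local states $S$ and the transition $F$ are toy assumptions): a cell $b$ is an immediate cause of cell $c$ when varying the state at $b$, with everything else held fixed, changes the next state at $c$.

```python
from itertools import product

B = range(4)                  # proto-space: four cells on a ring
S = (0, 1)                    # local states, so Q = S^B

def F(q: tuple) -> tuple:
    """Toy deterministic transition: each cell XORs with its left neighbour."""
    return tuple(q[i] ^ q[i - 1] for i in B)

def immediate_causes(c: int) -> set:
    """Cells b whose variation (in some global state) changes F(q) at c."""
    causes = set()
    for q in product(S, repeat=len(B)):
        for b in B:
            q2 = list(q)
            q2[b] ^= 1                       # vary the state at b only
            if F(tuple(q2))[c] != F(q)[c]:
                causes.add(b)
    return causes

print(immediate_causes(2))   # {1, 2}: cell 2's next state depends on cells 1, 2
```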

The problem with modern physics is that it lacks a convincing ontology. Up to now we have none except the division into regions of space-time and their field-properties. Physics should be intuitively simple. But all ontologies are approximative only and ultimately confusing.

Does Lawvere's theory of quantifiers as adjoints allow us to view logic as geometry ? $\exists$ corresponds to projection and $\forall$ to containment of fibers. Let $\pi: X \times Y \rightarrow X$ be the canonical projection and let a geometric object $P \subset X\times Y$ represent a binary predicate. Then $\exists y P(x,y)$ is represented by the predicate $\pi(P) \subset X$ and $\forall y P(x,y)$ is represented by $\{x \in X: \pi^{-1}(x) \subset P\}$. For monadic predicates we use $\pi: X \rightarrow \{\star\}$, so that for $P \subset X$ we have that $\exists x P(x) = \{\star\}$ corresponds to $P$ being non-empty and $\forall x P(x) = \{\star\}$ corresponds to $P = X$. Combining this we see that $\forall x \exists y P(x,y)$ corresponds to $\pi(P) = X$ and $\exists x \forall y P(x,y)$ corresponds to $P$ containing a fiber $\pi^{-1}(x)$. Exercise: interpret the classical expression of $\forall$ as $\neg\exists\neg$ geometrically. Conjunction is intersection, disjunction is union. What is the geometrical significance of classical implication $P \rightarrow Q$ as $P^c \cup Q$ (for monadic predicates) ? This is all of $X$ only if $P \subset Q$; so it measures how far away we are from the situation of containment.
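This geometric reading can be checked mechanically on finite sets; a minimal sketch (Python; the sets and the predicate $P$ are arbitrary illustrations):

```python
X, Y = {0, 1, 2}, {0, 1}
P = {(0, 0), (0, 1), (1, 0), (2, 1)}     # a 'geometric object' in X x Y

exists_y = {x for (x, y) in P}                              # projection pi(P)
forall_y = {x for x in X if all((x, y) in P for y in Y)}    # fibers inside P

print(exists_y)        # {0, 1, 2}
print(forall_y)        # {0}: only the fiber over 0 is contained in P
print(exists_y == X)   # True:  forall x exists y P(x, y)  (pi(P) = X)
print(bool(forall_y))  # True:  exists x forall y P(x, y)  (P contains a fiber)
```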

We have a meaning M and project it to a formal expression E in a system S. Then we apply mechanical rules to E to obtain another formal expression E'. Now somehow we must be able to extract meaning again from E' to obtain a meaning M'. But how is this possible ? Reason, argument, logic, language - it is all very much like a board-game. The foundations of mathematics: this is the queen of philosophy.

Jan Westerhoff's book on the Madhyamaka, p. 96.  I fail to see how the property "Northern England" can depend existentially on the property "Southern England".   Because conceptual dependency only makes sense relative to a formal system.  I grant  B may be a defined property and A's definition may explicitly use B. But why can't we just expand out B in terms of the primitives of the formal system in use ? And what does it even mean for two concepts to be equal ? What are we doing when replacing a concept by its definition (and Frege's puzzle, etc.) ?  

A must read: Hayes' essay on Nâgârjuna. Indeed svabhava is both being-in-self and being-for-itself !

T.H. Green on Hume is just as good as anything Husserl or Frege wrote against psychologism or empiricism.

René Thom: quantum mechanics is the intellectual scandal of the 20th century. An incomplete and bad theory  that includes the absolutely scientifically unacceptable nonsense of the 'collapse of the wave-function'. 

Bring genetic epistemology (child cognitive development) into the foreground of philosophy. Modify Husserl's method into a kind of phenomenological regression.

When we say 'we' do we mean I +  he/she or they  - or something different ? 

There is a formal logico-mathematical perfection in Plato's earlier dialogues. It is where Aristotle, perhaps, got much of his Topics.

 If A and B are decidable predicates then $A \subset B$ need not be. This is important. 

The effective topos - uniform fibrations - all this goes back to understanding predicate logic once propositional logic is understood intensionally in terms of realizability. A proposition means all the ways it is computationally realized. Note that there have to be various ways, because of disjunction. This is pure intensionality. So a proposition's meaning is a subset of $\mathbb{N}$. But does this subset have to be itself computable ? Predicates are then elements of $\mathcal{P}\mathbb{N}^X$.

Agda is the best proof assistant. Predicates are just fibered types $X \rightarrow Set$. Agda is pure combinatoric 'lego' logic: elegant, simple, powerful, flexible.

Zalta's encoding, the fusion of properties into an abstract individual object - this is a benign form of self-reflection or return-to-self whereby a property may be predicated of itself in second-order logic.

What does it mean to know something ? For instance, the term 'man' is part of linguistic communities. But how can knowing a definition have much to do with scientific knowledge in the modern sense ? And how do we account for the meaning of such terms across possible worlds ? The problem is even harder for individuals which are the referents of proper names. The term 'man' would seem to conceal an open-ended horizon of facts and knowledge, as indeed does the term 'animal'. But perhaps what remains invariant under the increase of knowledge in the semantic scope of the terms is the relation between the two terms. The ancient knowledge of definitions was thus the knowledge of the invariant relation between epistemically open terms. Of course the 'difference' employed can itself be open-ended and capable of extension and refinement, but the whole idea is that it should be simpler and more stable than the genus and the species.

Difference between 'concept' and 'meaning'.  When somebody says 'man !' clearly the mental content invoked is not the sum total of one's epistemic domain related to this term - one's concept of man. Rather it is a minimal relevant 'sketch'  (and this can be ascertained by the phenomenological method). Perhaps like 'pointers' in C. Something similar must be happening for proper names. Indeed the whole problem of proper names is related to individual essences and gets tangled with the problem of determinism.  Maybe 'sketches' are like definite descriptions...which only point to a more complete concept.

The Halting problem is undecidable. Suppose we had a machine with code $u$ which, given as input the code $e$ of a machine and an input $f$, could tell whether $\{e\}f \downarrow$. Now consider the machine with code $d$ which, given an input $x$, computes as follows: if $\{x\}x \downarrow$ then it never stops, otherwise it stops. Does $\{d\}d$ stop ? If it does, that means that $\{d\}d$ must never stop (contradiction). If it doesn't, then by the definition of $d$ it does. Hence the machine $u$ cannot exist (see the sketch below). For term-rewriting systems: there is no term rewriting system U which can be interpreted as giving the answer with regard to the derivability of a word W starting from a word S with rules R.
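The diagonal argument written out as a (necessarily non-executable) sketch in Python; the function `halts` is the hypothetical decider whose existence is being refuted:

```python
def halts(e, f) -> bool:
    """Hypothetical machine u: decides whether machine e halts on input f.
    No such total computable function exists; this stub is the assumption
    to be refuted."""
    raise NotImplementedError

def d(x):
    """The diagonal machine: loops iff its argument halts on itself."""
    if halts(x, x):          # if {x}x halts ...
        while True:          # ... then d runs forever
            pass
    return 0                 # otherwise d halts

# Run d on (a code for) itself: if d(d) halts then halts(d, d) was True and
# d(d) loops; if d(d) loops then halts(d, d) was False and d(d) halts.
# Either way we have a contradiction, so the machine u cannot exist.
```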

Globality: a function $f$ may be continuous on two disjoint (clopen) sets $A$ and $B$ of real numbers but fail to be so on $A \cup B$. The definition of being continuous on a boundary point is problematic.

A postulate of pure reason: there is a term rewriting system T and term rewriting system S such that a derivation in T is taken as certain knowledge that a certain word cannot be derived in S.

Two visions of the absolute: the plurality of mutually and self-reflecting, interpreting and meta-interpreting axiomatic-deductive systems, and the systems-theoretic view of a plurality of learning, dynamic, communicating interaction systems which represent the whole within themselves (representation). Thus we have the paradigm of deductive or proof systems for the first vision and computational systems for the second, though by 'computational' systems we include non-standard paradigms beyond the Turing limit as well as systems inspired by biology and by consciousness. And yet it is through axiomatic-deductive systems that such systems are known and understood.

The whole and the part. How parts are organized into the whole via a relation between parts. To be able to compare and identify parts - thus to seize the type of a part and differentiate between its instances. To be able to change the whole through replacement of a particular part.

topos-HOL with reflection/representation: $enc: [ I, [I]]$.

Absolutism allows the validity of a relative relativism relative to an absolute canvas. As proof it only requires that there be some absolute absolutely knowable truth - like core arithmetic and human and animal rights -  which are then the required framework for all further meaningful relative perspectives and action.  Relativism on the other hand cannot tolerate there being any absolute whatsoever; furthermore by doing so it itself becomes a form of absolutism and thus implodes.

Systems theory in ancient philosophy:  when genus, property or definition depend essentially on interaction and relation.  The axiomatic-deductive vs. the systems theory view. And so much better the analytic and semi-analytic from a logical perspective.  This is the way to do differential equations.

What are the scientific theories which are strictly necessary for the design and manufacture of the most important technology ? And what mathematics would be strictly necessary for these scientific theories ? And are not all our functions analytic with computable coefficients - and how should we view numerical methods philosophically ?

The correct foundation for the calculus and the theory of the continuum is still an open problem. We must get rid of the bad influence of ZFC foundations. Does it make sense to say that a recursive axiomatic-deductive system can 'grasp' the continuum ? We need to investigate and promote alternative foundations based on dependent type theory and category theory.

Mathematics is primarily relevant and valuable in the following aspects: i) for application and deployment in science including the clarification of the essential optimal structure of the relevant deployment and ii) as a preparation and antechamber to pure logic and philosophy - with particular emphasis on computability.    The other kind of mathematics for its own sake, without any regard for logic and philosophical foundations and proof-theoretic and conceptual optimality,  while legitimate is certainly overrated by society and certainly should not be set up as a paradigm of human 'intelligence'. The same goes for theoretical physics which has no contact with experimentation, empirical evidence or practical applications - the same goes for logically and conceptually radically incomplete or inconsistent theories.

Physics lacks an ontology. Its usual ontology is merely derived, accidental and approximative.  Consider a computer as a huge cellular automaton (CPU + RAM + storage and peripherals). How can we justify at a low level the abstraction to higher-level data structures and processes ? For instance if aliens observed the working of a computer at a low level ?

A challenge to the genus + difference template: what if we take grandfatherhood as a species of the genus family relation ? But all family relations depend in their definition on the primitive relations of fatherhood and motherhood - which are better known than other relations.

Philosophy is something that must always keep beginning again, beginning from the beginning. But what could such a beginning be ? Either a formal game with rules, or a mere description of what is, free from suppositions (as a doctor observes symptoms). In the first case we face the problem of assigning meaning to the pieces and the rules; in the second case: is it really possible to free ourselves from presuppositions, and must not the description itself depend on language ? The extreme objectivity of phenomenology paradoxically becomes extreme subjectivism.

From the certain knowledge of the moral law we deduce the existence of other sentient beings. The law implies the possibility of their existence. Assuming the appearances of such beings to be real can only be good; assuming they are not real could be disastrous and bad. Hence the reality of other apparent sentient beings is a basic postulate of practical reason. There is also an argument from the reality of mathematical concepts: since the appearances of the natural world participate in and consistently conform to mathematics, it is reasonable and plausible to assign to them some degree of reality.

Kant, in the A-version of the transcendental deduction of the pure concepts of the understanding: one passage seems to suggest an argument that can be paraphrased as pointing out that Hume's account of laws is self-defeating. It is no good to say that our knowledge of laws comes from the frequent association of certain phenomena, for this is itself just the statement of an alleged psychological law about the human mind - which in turn must then be just such an inductive generalization or habit, and one which in fact is contradicted by experience, as Schopenhauer pointed out regarding the regular succession of day and night.

In English the indefinite article is sometimes used in a definite sense: everybody has a father.

Perhaps time is already a kind of consciousness and memory of the world. Every instant the universe disappears and is replaced by a similar universe.  How can 'the' universe be real ? Which universe is 'the' universe ? Point to it (cf. the dialectic of sense certainty in the Phenomenology of Spirit).  Just as cells in living organisms are replaced, the organism is a kind of abstraction, a structured wake against the continuous flow of matter, energy and information, so too time represents the living recycling process of the universe. Time must carry information.  Everything is ultimately  a concept, but what is a concept ?

Add to the discussion on Measure: the theory of well-posed problems in PDE, where continuous dependence on the initial conditions can fail.

Thursday, July 25, 2024

On the Field-only Approach to Quantum Field Theory

This post consists only of some incomplete sketches and is obviously very tentative.

René Thom called quantum mechanics 'the greatest intellectual scandal of the 20th century'. Maybe this was too harsh, but quantum theory was originally meant to be only a crude, provisional proto-theory destined to give way to something better (which has not happened... due to political, military, economic and industrial reasons ?). Consider the double-slit experiment. The 20th century was also the century of dynamical systems and chaos theory. It is clear to us that the random aspect of the double-slit experiment must be explained in light of chaos theory, thus of an underlying deterministic system. In a classical setting there will also be a pseudo-random aspect for particles traversing the two slits (but without the interference pattern); nobody would think of interpreting this as a probabilistic collapse of a wave-function. In the non-classical situation it would occur to almost anyone to see the wave-function as a real physical field associated to particles (a "pilot-wave"). If we rule out local hidden variables (but do we really need to ?), then we are led to non-local yet deterministic non-linear systems which generate the pseudo-random phenomena of quantum theory in the standard way of chaotic dynamics. Even a system of numerous perfectly elastic colliding particles is deterministic and yet yields Brownian motion.

To do: study the argument involving single photons and half-silvered and full-silvered mirrors described in Penrose's The Emperor's New Mind, p. 330 (1st edition). Both the photon and the wave-function are real existing physical entities, and the randomness of the reflection can be given an underlying deterministic explanation. Some wave-packets are empty of particles yet still have physical meaning. We could also consider space as being like a Poincaré section for some higher-dimensional continuous dynamic.

Of course there is an easy objection to our proposal: what about maximally delocalised solutions of the free-particle Schrödinger equation ? Due to many other difficulties we could also take A. Hobson's approach that 'there are no particles, only fields'. Here small-scale irregularities, the fact that we are dealing with approximations, etc. could well explain the 'collapse of the wave function' - if we postulate that quantum fields have an intrinsic holistic nature, so that their localized interaction around the boundary of their support entails an immediate (or very fast) alteration of the entire field (Hobson gives the analogy of popping a balloon). In the double-slit experiment, if we think of the wave-function as a single entity, then in reality only one small portion of the wave-front will hit the screen first - which portion being determined by sensitivity to initial conditions and the many perturbations and irregularities in the instruments involved in the experiment. This, based on Hobson's own analysis, could furnish the missing piece needed to eliminate any appeal to probability, even in a field-only interpretation. The $|ready\rangle$ state is itself complex and fluctuating (deterministically); hence the pseudo-randomness of which region $A_i$ will effect the 'pop'. There seems, however, to be a difficulty in interpreting the apparently random aspect of the experiment discussed by Penrose above (it would suggest that the result of 'popping the balloon' must still be considered random). But is this experiment really so different from the double-slit one ?
We need to find the inner geometric deterministic dynamics of field interactions that could account for this behaviour. Maybe use the fact of the interference of the environment (and entanglement) in all experimental conditions. 
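As a minimal illustration of the general point (and only of that point - this is not a model of the double-slit experiment), here is a toy Python sketch in which a fully deterministic chaotic map generates outcomes statistically indistinguishable from coin flips; all names and parameter values are our own invention:

```python
# Toy sketch: deterministic chaos producing coin-flip statistics.
# The logistic map at r = 4 is deterministic yet chaotic; thresholding
# its orbit yields an effectively random sequence of 'detector' outcomes.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

x = 0.123456789  # initial condition; a tiny change scrambles all outcomes
outcomes = []
for _ in range(100000):
    x = logistic(x)
    outcomes.append(1 if x > 0.5 else 0)  # 'which detector fired'

# Frequency close to 1/2, with no probabilistic postulate anywhere.
print(sum(outcomes) / len(outcomes))
```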


Hobson interprets $|\Psi|^2$ as the probability of interaction of the field. We need to add an extra dimension to $\Psi$ and an accompanying deterministic non-linear dynamic field-process (as in non-linear PDEs) which explains the resulting interaction probabilities in a totally deterministic way. This is where chaos theory is the key to quantum theory. This applies both to interaction and to spin-measurement. Consider the classical orbitals of the Hydrogen atom: some have nodal points which seem to rule out a particle interpretation. Also, spin basically involves extending the phase space of the original wave function, for instance for a single particle $L^2(X, \mu) \otimes \mathbb{C}^2$; thus our proposal is not surprising. On the other hand, if we consider the orbitals of the Hydrogen atom it seems natural that they should also possess some kind of dynamic nature in an extended dimension (analogous to spin) related to the amplitude of the original wave function.

In the Penrose experiment considered above, consider detectors in two distant locations at which each spin configuration has probability 1/2 and at which the two measurements are always correlated. We view the electron wave as a single entity even when divided into two packets. The unity is expressed in the phase in the extended dimension, which oscillates not as two independent oscillators (one for each packet) but as a single oscillator, thus guaranteeing the correlation of the measurements.

A model: a packet could have a phase oscillating between UP and DOWN in the extended dimension which determines the measurements (interaction probabilities). But two coupled packets would oscillate between UP × DOWN and DOWN × UP globally.
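A minimal numerical sketch of this model, under our own simplifying assumptions (a single global phase read off at measurement times; the frequency and times are invented purely for illustration):

```python
import math

# One global phase drives BOTH packets: the left packet reads UP when
# sin(theta) > 0 and the right packet then reads DOWN, and vice versa.
# A single oscillator, hence perfectly anti-correlated outcomes.

def measure(theta: float):
    up = math.sin(theta) > 0
    return ('UP', 'DOWN') if up else ('DOWN', 'UP')

omega = 137.035  # fast incommensurate frequency: outcomes look random
for k in range(6):
    t = 0.817 + 1.313 * k          # deterministic measurement times
    print(measure(omega * t))      # always UP/DOWN or DOWN/UP, never UP/UP
```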

Consider two distinct localized wave packets $P_1$ and $P_2$ centered around points $-x$ and $x$, for $x > 0$ larger than their wave-length. If $P_1$ moves forward and $P_2$ moves backwards so that they exchange places, then the resulting quantum state, and hence the physical state of the system, will be exactly the same as it was initially. Thus the indistinguishability of identical particles follows immediately from the field-only approach, while it is problematic for the particle approach.

Monday, June 17, 2024

Miscellany of philosophical observations

1. Quantum theory gave us the idea of introducing negative probabilities, i.e. signed measures. 

2. Category theory is intensional (non-extensional) mathematics based on minimal logic, thus hyper-constructive. We ask about a natural number object in a given category, that is, about its universal property (the concept of an 'element' is not taken as primitive; rather we have only generalized elements $1 \rightarrow A$); we construct concrete generalized-element 'numbers' through composition of the primitive morphisms $z : 1 \rightarrow N$ and $s : N\rightarrow N$. Recall how the concept of primitive recursive function emerges naturally from this definition...
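A small Lean 4 sketch of the idea (our own rendering): numerals as composites of the primitive morphisms, with the recursor playing the role of the universal property, from which primitive recursive functions such as addition fall out:

```lean
-- 'Numbers' as composites of the primitive morphisms z : 1 → N, s : N → N.
inductive N where
  | z : N
  | s : N → N
deriving Repr

-- The universal property of the NNO: for any A with a : A and f : A → A
-- there is a (unique) map N → A sending z to a and s-successors to f.
def recN {A : Type} (a : A) (f : A → A) : N → A
  | .z   => a
  | .s n => f (recN a f n)

-- Primitive recursion in action: addition defined via the recursor.
def add (m n : N) : N := recN m N.s n

def two : N := N.s (N.s N.z)
#eval add two two  -- four applications of s to z
```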

4. There have always been different notions of 'quantification' (and of the corresponding determiners) which were conflated by extensionalist logicians. This is clear in the distinction between intensional, conceptual universal quantification and extensional quantification. Such distinctions are also brought to light by the behaviour of quantifiers in propositional attitudes. Constructivism tried to bridge the gap between extension and intension via a kind of schematism (see the previous post). We must bring all the different kinds of quantification to light again. 'Some' seems to be even richer in nuances than 'for all'. The distinction between the classical and the intuitionistic/constructivist 'some' is deeply rooted in and reflected in cognition and natural language semantics. For instance, the intuitionistic interpretation fails for existential formulas in the scope of propositional attitudes: I may believe that the money is in a book in the library without there being a specific book in which I believe the money to be.

Are set-theoretic extensions atomistic, structureless heaps, like the extreme abstract atomic alienated negativity in certain stages of Hegel's Phenomenology of Spirit ? Not really: they can have a very definite tree-like structure. Groupoids have more organic unity. We must investigate what it means to quantify over groupoids.

5. Some people are scared of homotopy type theory, higher category theory or of Coq and Agda. I respect that. I feel the same about fractal calculus. But perhaps fractal calculus has something to do with the following important question. Numerical, discrete, computational methods are routinely used to find approximate solutions of differential (and integro-differential) equations. But we also need, in turn, a theory of how differential and smooth systems can be seen as approximations of non-differential and non-smooth systems. Is this not what we do when we apply the Navier-Stokes equations to model real fluids ? Recall how continuous functions with compact support are dense in the $L^p$ spaces of integrable measurable functions (but see also Lusin's theorem). Can all this be given a Kantian interpretation ? An analogy of experience: the very notion of measurable function presupposes the standard topology and Borel structure on the real line $\mathbb{R}$.
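A small Python sketch of the approximation direction mentioned here, under our own toy choices (a Gaussian mollifier, the non-smooth function $|x|$): smooth functions approximating a non-smooth one in the $L^2$ distance:

```python
import numpy as np

# Mollification: convolve the non-smooth f(x) = |x| with a narrowing
# normalized Gaussian kernel; the smooth results converge to f in L^2.

x = np.linspace(-1.0, 1.0, 4001)
dx = x[1] - x[0]
f = np.abs(x)  # not differentiable at 0

for eps in (0.2, 0.05, 0.01):
    kernel = np.exp(-(x / eps) ** 2)
    kernel /= kernel.sum() * dx                    # normalize: integral = 1
    smooth = np.convolve(f, kernel, mode='same') * dx
    err = np.sqrt(np.sum((smooth - f) ** 2) * dx)  # L^2 error
    print(f"eps = {eps:5.2f}   L2 error = {err:.5f}")  # shrinks with eps
```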

6. What are distributions ? They allow a mathematical treatment of the vague notion of particle. Indeed 'particles' are just euphemisms for certain kinds of stable self-similar field-phenomena. The great geniuses in physics were those who helped build geometric physics (which is what is most developed and sophisticated in modern physics): Leibniz, Lagrange, Euler, Hamilton, Gauss, Riemann, Poincaré, Minkowski and many others. But it is no use playing around with highly sophisticated geometric physics (which loses all connection to experiment) if you haven't solved the problem of quantum theory first.

Distributions are clearly in themselves meant to be idealizations and abstractions of actual functions, their ultimate aim being approximation results. What is a Dirac function ? This will depend on the scale: Dirac functions in nature are only approximate.
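A matching Python sketch for this point (our own toy choice of Gaussian approximants $\delta_\epsilon$): the 'Dirac function' only exists relative to a scale, as its numerical action on a test function shows:

```python
import numpy as np

# delta_eps(t) = exp(-(t/eps)^2) / (eps * sqrt(pi)) integrates to 1 and
# concentrates at 0; pairing it with a test function f tends to f(0).

f = np.cos  # test function, f(0) = 1
t = np.linspace(-1.0, 1.0, 400001)
dt = t[1] - t[0]

for eps in (0.5, 0.1, 0.01):
    delta = np.exp(-(t / eps) ** 2) / (eps * np.sqrt(np.pi))
    value = np.sum(f(t) * delta) * dt  # approximate the pairing integral
    print(f"eps = {eps:4.2f}   <delta_eps, f> = {value:.6f}")  # -> 1.0
```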

7. Study differential geometry as type theory; dispel all difficulties in a general understanding of mathematics as a language. It is of the utmost importance to give physics, especially quantum theory, great formal logical, mathematical and philosophical rigour. Outstanding example: Peter Bongaarts' book.

8. Many of our concepts have a tripartite nature $(A, A^\circ, \bar{A})$ expressing certainly $A$, certainly not $A$, and the grey neutral area $\bar{A}$. For instance: bald, not bald, sort of bald but not really bald. Each in turn will depend on an individual and a possible state of affairs. But this is not enough. In order to do any kind of 'logic' here we need some kind of quantified probability measure, for instance the ability to measure quantities of individuals and states of affairs. Then the sorites is resolved by presenting a tripartite distribution. Thus it is interesting to have a logic which can express probability distributions.
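A schematic Python sketch of such a tripartite concept equipped with a counting measure over individuals (the thresholds and population figures are invented purely for illustration):

```python
import random
from collections import Counter

# Tripartite concept (A, grey zone, not-A) for 'bald', with a counting
# measure over a simulated population: the sorites dissolves into a
# measured distribution instead of a sharp cutoff.

def classify(hair_count: int) -> str:
    if hair_count < 20000:  return 'bald (A)'
    if hair_count > 90000:  return 'not bald'
    return 'grey zone'

random.seed(0)
population = [random.randint(0, 150000) for _ in range(100000)]
print(Counter(classify(h) for h in population))  # the tripartite distribution
```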

9. The goal is to pass from language-based philosophy to a philosophy based on pure logic. But this needs a mediator, and the mediator can only be advanced, sophisticated mathematical models - qualitative, essential, extending to all domains of reality (deformations and moduli are the right way to study possible worlds). All aspects of Kant and Husserl can be given their mathematical interpretation and from thence their logical-axiomatic interpretation. The same goes for Naturphilosophie via René Thom and Stephen Smale. Theoretical platonism and idealism are not enough: we need this realized, applied platonism. Mathematics furnishes a rigorous way of dealing with analogy and of integrating analogy into philosophy. Mathematics also furnishes the deeper meaning and interpretation of Kant's theory of categories and schematism, and gives us a way of studying concepts which is not divorced from the conceiving mind but at the same time is not psychologistic.

10. How do mathematicians actually think, prove theorems and have insight and intuition - all of which is very different from a low-level proof-search in some formal axiomatic-deductive system ? In particular, how can formal logic and intuition agree ? If logic is the science of valid thought, then it simply cannot ignore this question. We certainly think immediately using admissible rules.

Consider a formal logic $L$ in which we have the concepts of atomic predicate, equivalence and equality. Let $T$ and $P$ be countably infinite sets of symbols not occurring in the language of $L$. By a prelogic we mean a pair $(t,p)$ consisting of finite sets $t,p$ of formulas of $L(T,P)$ of the form $q(x_1,...,x_n) \equiv ...$ and of the form $t(x_1,...,x_n) = ...$. We write $(t_1,p_1) \leq (t_2,p_2)$ iff the symbols in the left-hand sides of $t_2,p_2$ all occur in $t_1$ and $p_1$, and furthermore $t_1 \subset t_2$ and $p_1 \subset p_2$. To each prelogic $(t,p)$ we further associate a set of intuitively valid sentences $ISen \subset Sen(t,p)$ and a set of intuitively valid inferences $IDed \subset Sen(t,p) \times Sen(t,p)$, where $Sen(t,p)$ denotes the set of sentences whose symbols all occur in $t,p$.
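A schematic Python rendering of this definition as we read it (representing each defining formula simply as a pair of its defined symbol and the set of symbols on its right-hand side is our own simplification):

```python
from dataclasses import dataclass

# A prelogic is a pair (t, p) of finite sets of defining formulas
# q(x1..xn) ≡ ... and t(x1..xn) = ..., each stored here as a pair
# (defined_symbol, frozenset_of_symbols_on_the_right).

@dataclass(frozen=True)
class Prelogic:
    t: frozenset  # definitional equalities
    p: frozenset  # definitional equivalences

def symbols(defs) -> set:
    out = set()
    for head, body in defs:
        out.add(head)
        out |= set(body)
    return out

def leq(a: Prelogic, b: Prelogic) -> bool:
    # (t1,p1) <= (t2,p2): containment of definitions, and every symbol
    # defined (on a left-hand side) in (t2,p2) already occurs in (t1,p1).
    heads_b = {head for head, _ in b.t} | {head for head, _ in b.p}
    return a.t <= b.t and a.p <= b.p and heads_b <= symbols(a.t) | symbols(a.p)
```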

11. The problem of the denotation involved in selecting one of the two orientations of a vector space, or one of the two square roots of $-1$.

12. Some important authors to study: Albert Lautman and Jean Petitot. A synthesis of Kant and Husserl within the framework of an enlightened mathematical structuralism.

13. Determinism may be only local. Determinism (think of analytic continuation) is like a covering space: only one continuation, one lifting of a path, for a chosen point in the fiber. But we can have, instead of a locally constant sheaf, a constructible sheaf: there is a stratification at which non-deterministic switches or choices take place (although they can be perfectly continuous).

14. What is completeness for a logical-deductive system ? And relative to a class of models ? Take intuitionistic propositional logic. The classical logical-deductive notion of completeness no longer applies; only a model-theoretic one does. And the model-theoretic one needs to change to become multi-valued, i.e. as in topos theory, or at least relative to Heyting algebras of truth-values. This was the insight behind Kant's transcendental dialectic: that $A \vee \neg A$ is not a universal law of reason.
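A standard worked example of the failure of $A \vee \neg A$ over a Heyting algebra of truth-values: take the open subsets of the real line, where negation is the interior of the complement. With $U = (0,\infty)$:

\[ \neg U = \operatorname{int}(\mathbb{R}\setminus U) = (-\infty,0), \qquad U \vee \neg U = \mathbb{R}\setminus\{0\} \neq \mathbb{R} = \top. \]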

Thursday, May 30, 2024

On Van Lambalgen et al.'s formalization of Kant

The paper by Van Lambalgen and Pinosio 'The logic and topology of Kant's temporal continuum' (which is just one of a series of papers by Van Lambalgen on Kant)  opens with a nice discussion and careful justification of the general idea of the formalization of philosophical systems. The coined expression 'virtuous circle'  is particularly fortunate. In this post, which will be continuously updated, we will critically explore the above paper and make some connections with our own work on Aristotle's theory of the continuum.

The primitives are called 'events' - self-affections of the mind which must be brought into order by fixed rules. The authors work over finite sets of events, which is justified by textual evidence from the CPR (we will return to this later). Their task is to formalize the relations between events - and thus to develop a point-free theory of the linear temporal continuum.

We find that their notation could be improved and the axioms better justified. Instead of the confusingly asymmetric $aR_- b$ and $cR_+ d$ (all for the sake of the substitution principle, I suppose, or for the transitivity axiom) let us write $a{}_\bullet \leq b$ and $d\leq_\bullet c$. Instead of $a\oplus b$ we write $a\leftarrow b$, and instead of $a\ominus b$ we write $a\rightarrow b$.

The basic idea is that $x{}_\bullet\leq y$ need not imply $x\leq_\bullet y$, nor vice-versa.

Kant's concept of causality implies that in order for a part $x$ of $a$ to influence $b$ we must have $x{}_\bullet\leq b$. Thus the following axiom is expected:

\[  a\ominus b{}_\bullet\leq b\]

But let us look at axiom 4 for event structures (in our notation):

\[ cOb\,\&\, a\leq_\bullet c \,\&\, b{}_\bullet \leq a \Rightarrow aOb \]

Our task is to make sense of this by offering a more satisfactory account of the primitive relations. Let us consider the set of connected (hence simply connected) subsets of the real line $\mathbb{R}$ and the interpretations:

\[ a{}_\bullet\leq b \equiv \forall x \in a. \exists y\in b. x\leq y  \]

\[ a \leq_\bullet b \equiv \forall y \in b. \exists x\in a. x\leq y  \]

This, however, does not work with regard to $a{}_\bullet\leq b \Rightarrow a\leq_\bullet b$. So let us take our events to be bounded open intervals $(a,b)$ and consider

\[ (a,b){}_\bullet\leq (c,d) \equiv  b < d  \]

\[ (a,b) \leq_\bullet (c,d) \equiv a < c \]

\[(a_1,a_2)O(b_1,b_2) \equiv a_2 > b_1\,\&\, a_1 < b_2\]

Then if we consider $(0,1)$ and $(0,2)$ we have that $(0,1){}_\bullet\leq (0,2)$ but not $(0,1)\leq_\bullet (0,2)$. The inequalities must be strict, for allowing $(a,b){}_\bullet\leq (a,b)$ would be absurd: we could then not associate any clear or definite Kantian philosophical concept with the relation.

Now let us look at axiom 4:

\[ (c_1,c_2)O(b_1,b_2)\,\&\, (a_1,a_2)\leq_\bullet (c_1,c_2) \,\&\, (b_1,b_2){}_\bullet \leq (a_1,a_2) \Rightarrow (a_1,a_2)O(b_1,b_2) \] which becomes

\[ c_2 > b_1\,\&\,  c_1 < b_2   \,\&\,a_1< c_1\,\&\, b_2 < a_2 \Rightarrow a_2 > b_1\,\&\, a_1 < b_2\]

But this follows immediately, using in addition the fact that $b_2 > b_1$. The condition $c_2 > b_1$ appears not to be needed.

We could try defining $(a_1,a_2)\rightarrow (b_1,b_2) := (a_1,b_2)$ when $a_1 < b_2$ and $(a_1,a_2)\leftarrow (b_1,b_2) :=  (b_1,a_2)$ when $b_1 < a_2$.

These models should be introduced right at the start of the paper to motivate the definition of event structure. Notice that the set of events is here identified with the (infinite) subset $E = \{(x,y) \in \mathbb{R}\times\mathbb{R} : x < y\}$ of the plane, though we could take only a finite subset.

We must check the axioms for event structures in our model, and also give a geometrical interpretation of the relations and operations above in terms of the identification of $E$ with a subset of the plane.
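As a first step, a small Python sketch (our own) that spot-checks axiom 4 in this interval model, with the relations defined exactly as above:

```python
import random

# Events are bounded open intervals (a1, a2), a1 < a2, with
#   a bullet<= b  iff  a2 < b2,    a <=bullet b  iff  a1 < b1,
#   a O b         iff  a2 > b1 and a1 < b2.

def interval():
    x, y = sorted(random.uniform(-10.0, 10.0) for _ in range(2))
    return (x, y)

def bullet_leq(a, b): return a[1] < b[1]   # a bullet<= b
def leq_bullet(a, b): return a[0] < b[0]   # a <=bullet b
def overlap(a, b):    return a[1] > b[0] and a[0] < b[1]

random.seed(1)
for _ in range(200000):
    a, b, c = interval(), interval(), interval()
    if overlap(c, b) and leq_bullet(a, c) and bullet_leq(b, a):
        assert overlap(a, b)   # axiom 4: cOb & a<=•c & b•<=a => aOb
print("axiom 4: no counterexample found")
```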

Saturday, May 25, 2024

The Young Carnap's Unknown Master

https://www.routledge.com/The-Young-Carnaps-Unknown-Master-Husserls-Influence-on-Der-Raum-and-Der-logische-Aufbau-der-Welt/Haddock/p/book/9780754661580

Examining the scholarly interest of the last two decades in the origins of logical empiricism, and especially in the roots of Rudolf Carnap's Der logische Aufbau der Welt (The Logical Structure of the World), Rosado Haddock challenges the received view according to which that book should be inserted in the empiricist tradition. In The Young Carnap's Unknown Master Rosado Haddock builds on the interpretation of the Aufbau propounded by Verena Mayer and on that of Carnap's earlier thesis Der Raum propounded by Sahotra Sarkar, and offers the most detailed and complete argument on behalf of a Husserlian interpretation of both of these early works of Carnap, as well as a refutation of the rival Machian, Kantian, Neo-Kantian and other more eclectic interpretations of the influences on the work of the young Carnap. The book concludes with an assessment of Quine's critique of Carnap's 'analytic-synthetic' distinction and a criticism of the direction that analytic philosophy has taken in following in the footsteps of Quine's views.

Thursday, May 23, 2024

Stephen Hicks in Explaining postmodernism

Showing that a movement leads to nihilism is an important part of understanding it, as is showing how a failing and nihilistic movement can still be dangerous. Tracing postmodernism’s roots (...) explains how all of its elements came to be woven together. Yet identifying postmodernism’s roots and connecting them to contemporary bad consequences does not refute postmodernism.

What is still needed is a refutation of those historical premises, and an identification and defense of the alternatives to them. The Enlightenment was based on premises opposite to those of postmodernism, but while the Enlightenment was able to create a magnificent world on the basis of those premises, it articulated and defended them only incompletely. That weakness is the sole source of postmodernism’s power against it. Completing the articulation and defense of those premises is therefore essential to maintaining the forward progress of the Enlightenment vision and shielding it against postmodern strategies.

The names of the postmodern vanguard are now familiar: Michel Foucault, Jacques Derrida, Jean-François Lyotard, and Richard Rorty. They are its leading strategists.

Members of this elite group set the direction and tone for the postmodern intellectual world.

Michel Foucault has identified the major targets: “All my analyses are against the idea of universal necessities in human existence.” Such necessities must be swept aside as baggage from the past: “It is meaningless to speak in the name of—or against—Reason, Truth, or Knowledge.”

Richard Rorty has elaborated on that theme, explaining that that is not to say that postmodernism is true or that it offers knowledge. Such assertions would be self-contradictory, so postmodernists must use language “ironically.”

Against this, Kantian ethics postulates:

1. Moral dignitarianism, the anti-egoistic, anti-utilitarian, and anti-relativistic universalist ethical idea that every rational human animal possesses dignity, i.e., an absolute, non-denumerably infinite, intrinsic, objective value or worth, beyond every merely hedonistic, self-interested, instrumental, economic, or utilitarian value, which entails that we always and everywhere ought to treat everyone as persons and never as mere means or mere things, and therefore always and everywhere with sufficient respect for their dignity, no matter what merely prudential reasons there are to do otherwise.

2. Political dignitarianism, the anti-despotic, anti-totalitarian, and anti-Hobbesian-liberal yet also liberationist, radically enlightened idea that all social institutions based on coercion and authoritarianism, whether democratic or not-so-democratic, are rationally unjustified and immoral, and that in resisting, devolving, and/or transforming all such social institutions, we ought to create and sustain a worldwide or cosmopolitan ethical community beyond all borders and nation-States, consisting of people who think, care, and act for themselves and also mutually sufficiently respect the dignity of others and themselves, no matter what their race, sex, ethnicity, language, age, economic status, or abilities.

Husserl:

 Whatever is true, is absolutely, intrinsically true: truth is one and the same whether men or non-men, angels or gods apprehend and judge it. Logical laws speak of truth in this ideal unity, set over against the real multiplicity of races, individuals and experiences, and it is of this ideal unity that we all speak when we are not confused by relativism.  

P. Tichý (Foundations of Frege's Logic):

Fate has not been kind to Gottlob Frege and his work. His logical achievement, which dwarfed anything done by logicians over the preceding two thousand years, remained all but ignored by his contemporaries. He liberated logic from the straight-jacket of psychologism only to see others claim credit for it. He expounded his theory in a monumental two-volume work, only to find an insidious error in the very foundations of the system. He successfully challenged the rise of Hilbert-style formalism in logic only to see everybody follow in the footsteps of those who had lost the argument. Ideas can live with lack of recognition. Even ignored and rejected, they are still there ready to engage the minds of those who find their own way to them. They are in danger of obliteration, however, if they are enlisted to serve conceptions and purposes incompatible with them. This is what has been happening to Frege's theoretical bequest in recent decades. Frege has become, belatedly, something of a philosophical hero. But those who have elevated him to this status are the intellectual heirs of Frege's Hilbertian adversaries, hostile to all the main principles underlying Frege's philosophy. They are hostile to Frege's platonism, the view that over and above material objects, there are also functions, concepts, truth-values, and thoughts. They are hostile to Frege's realism, the idea that thoughts are independent of their expression in any language and that each of them is true or false in its own right. They are hostile to the view that logic, just like arithmetic and geometry, treats of a specific range of extra-linguistic entities given prior to any axiomatization, and that of two alternative logics—as of two alternative geometries—only one can be correct. And they are no less hostile to Frege's view that the purpose of inference is to enhance our knowledge and that it therefore makes little sense to infer conclusions from premises which are not known to be true. We thus see Frege lionized by exponents of a directly opposing theoretical outlook.

Thursday, April 25, 2024

Plato's Sophist and Type Theory (older post)

Suppose we had a type $A \rightarrow B$. Then application $a : A, f : A\rightarrow B \vdash f a : B$ can be 'internalized' as a type \[ A\rightarrow (A\rightarrow B) \rightarrow B \] But application for this type can likewise be internalised as \[ A \rightarrow (A \rightarrow B) \rightarrow (A\rightarrow (A\rightarrow B) \rightarrow B ) \rightarrow B \] and so forth. This is similar to the 'third man' argument of the Parmenides. Take $B$ to be the 'truth-value' type $\Omega$. The 'canopy' argument in fact seems to herald the idea that a type (i.e. a 'form' or 'unsaturated' propositional function) should be seen not only as a 'set' but as a 'space', as in homotopy type theory. In a passage of the Sophist 237 there is a remarkable discussion of 'non-being'. You cannot talk about non-being, because by doing so you already implicitly attribute to it the mark of a something, an 'aught' - both unity and being. Now this passage is in many ways an anticipation of the elimination rule for $\bot$ in natural deduction, as well as of its logical deployment in axiomatic set theory, especially in formal proofs dealing with $\emptyset$. But most interesting is the connection to Martin-Löf type theory: the zero or empty type $\mathbb{O}$. This type is not inhabited by anything. And yet to use this type, to reason with it, you must assume that it is inhabited: $a : \mathbb{O}$. Martin-Löf type theory allows us to flesh out Plato's intuition about the connection between falsehood, nothingness and absurdity. The empty set is a paradigm of the initial object of a category: there is a unique set-theoretic function $f : \emptyset \rightarrow X$ for any given set $X$. Correspondingly the type $\mathbb{O}\rightarrow A$ is inhabited, where $A$ is for example some non-empty inductive type. Thus we can speak meaningfully about nothing. Consider the very difficult passage Sophist 243-246, which clearly picks up on and rectifies the Parmenides. But what are 'Being', 'Unity' and the 'Whole' for Plato ? We can consider the unit type $\mathbb{I}$ as corresponding to 'Unity' and think of it either as a set-theoretic singleton $\{u\}$ or as a contractible type where equality is interpreted as a homotopy path. In this sense it is a homogeneous space, much like the 'sphere' in Parmenides' poem. Plato does indeed distinguish between pure unity and a whole participating in unity. The singleton set is a paradigm of a terminal object in a category.
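A Lean 4 sketch of this point (our own illustration): to eliminate from the empty type one hypothetically assumes an inhabitant, and the initial/terminal structure mirrors $\emptyset$ and the singleton:

```lean
-- Nothingness as the initial object: from a hypothetical a : Empty we
-- obtain a term of any type whatsoever (ex falso quodlibet).
def fromNothing (A : Type) : Empty → A :=
  fun a => a.elim

-- Unity as the terminal object: every type has a (unique) map to Unit.
def toUnity (A : Type) : A → Unit :=
  fun _ => ()

-- At the level of propositions: from absurdity anything follows.
example (P : Prop) (h : False) : P := h.elim
```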
The 'Whole' is clearly a 'universe' type $U$ or $Type$. It is a difficult problem to relate the 'Whole' to 'Unity'. What does it even mean for the 'Whole' to participate in 'Unity' ? That there is one supreme universe $Unity$ with only one inhabitant $Type : Unity$ ? But then we could not have cumulativity: $a : Type$ implying $a : Unity$. Or, categorically: how do we interpret the fact that all objects $A$ admit a unique morphism $A \rightarrow 1$ ? Category-theoretically an ($\infty$-)groupoid is a candidate for a category 'participating in unity'. There is a tension between unital being $\mathbb{I}$ and being-whole, the being spread out and shared by all beings, $Type$. The logic in this passage is apparently 'type-free' and impredicative. 'Being' $: \Pi (X : Type), \Omega$ and 'Unity' $: \Pi (X : Type), \Omega$ seem to be applicable meaningfully to anything (indeed $Type : Type$). The passages on 'names' and 'reference' are quite striking, especially when Plato conjures up 'names that refer to names' and a name referring only to itself (imagine a type inhabited only by itself). Part of Plato's argument is that no matter what in reality 'is', the fact remains that there is a plurality of names. Concerning the later section on the five supreme genera we may ask: why is there not also a form for 'participation' itself ? What are some fundamental kinds of types ? The empty type $\mathbb{O}$, the unit type $\mathbb{I}$, the universe(s) $\mathcal{U}$ (or $Type_i$), equality $\Pi (X\, Y : Type), Prop$, number $Nat : Type$ and the interval $\mathbf{I}$ for path spaces or cubical sets in homotopy type theory. Paths represent change; the interval represents temporality. Plato speaks of equality being or not being equal to something, just as Voevodsky speaks of equality being equivalent to equivalence.

Wednesday, April 24, 2024

Formalism is not clarity

I presuppose, then, that one will not be content to develop pure logic in the mere manner of our mathematical disciplines, as a system of propositions growing up in naively objective validity, but that one will at the same time strive for philosophical clarity in regard to these propositions (...) - Husserl, Log. Unt. I

On a surface level Aristotle's Organon and Physics are formally impressive and, from a contemporary mathematical point of view, quite suggestive. Yet if we analyze things very carefully we find that at a deeper level we are in the presence of a big step backwards from Plato, one which also cannot really be compared to the sophistication and brilliance of the Stoics. For in Aristotle the key fundamental terms and concepts ("term", "concept", "predication", "essential predication", "proposition", "huparkhein", "quality", etc.) are never defined, elucidated or clarified, and perhaps are not even used consistently. There is also a serious lack of grammatical and linguistic analysis. To study Aristotle it is not enough either to engage in traditional "classicist" or commentary-based methods of exhaustive textual analysis and nitpicking, or to think that modern mathematical or symbolic logic is in itself a sufficient tool to clarify all problems. Rather we must deploy what is scientific and sophisticated in modern philosophy to bring to light what lies beneath the surface of the Aristotelian texts. Fortunately we do have a kind of philosophical Principia Mathematica: Husserl's Logical Investigations, together with other subsequently published and equally important texts complementing and developing this work.

Recall Husserl's distinction between judgments of existence and judgments of essence. Can this help us understand the universal quantifier ? Consider:

1. All ducks can swim.

2. All people in this room are under 30.

3. All prime numbers greater than 2 are odd.

What does 1) mean - what do we mean by 1) ? That swimming is part of the definition of duck, that being able to swim is a logical consequence of the definition of duck (and here we are assuming an artificial consensus) - or that everything belonging to the extension of duck (for instance, we could just take a heap of things and label it "duck") happens to have the property of being able to swim (this is unlikely, or at most genetic and merely plausible) ? For 2) we cannot state that being under 30 is somehow a logical consequence of the concept of being in this room; 2) is definitely a Husserlian 'judgment of existence'. 3) can be given an extensional reading, but it could also be given a logical reading in the sense that being odd follows from the definition of being prime together with the condition of being greater than 2. Thus 3) differs from 1) and 2) by allowing both interpretations. 3) can also mean: there is an algorithm which takes as input a prime number and a proof that this number is greater than 2, and yields as output a proof that it is odd.

But consider a model-theoretic approach.  For a model $M$, representing the current world, or current global state-of-affairs, we may well have that $M \Vdash \forall x. \phi(x) \rightarrow \psi(x)$ without it being the case that for our theory $T$ we have  that $T \vdash \forall x. \phi(x) \rightarrow \psi(x)$. But the statement $M \Vdash \forall x. \phi(x) \rightarrow \psi(x)$ must itself be proven in some metatheory $T'$ and is thus again purely logical.

The extensional interpretation of 1) can be: i) that things in the extension of "duck" have the property of being able to swim. ii) that the extension of "duck" is contained as a set in the extension of "being able to swim". 

More profound is the dependent-type-theoretic interpretation $\vdash p: \Pi_{ x : D} S(x)$, which reads: there is a function $p$ which takes as input a duck and yields a proof that that particular duck can swim. Compare this to Bobzien and Shogry's interpretation of Stoic quantification:

If something is a duck then that duck can swim. 
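In Lean 4 notation this reading is direct (Duck and CanSwim are of course hypothetical names of ours):

```lean
-- Hypothetical domain: a type of ducks and a swimming predicate on it.
variable (Duck : Type) (CanSwim : Duck → Prop)

-- 'All ducks can swim' as a dependent product Π (d : Duck), CanSwim d:
-- a function p taking a duck and returning a proof for THAT duck.
example (p : (d : Duck) → CanSwim d) (donald : Duck) : CanSwim donald :=
  p donald

-- The conditional reading 'if something is a duck then that duck can swim'
-- is the same Π-type, here written with ∀:
example (p : ∀ d : Duck, CanSwim d) : (d : Duck) → CanSwim d := p
```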

How far we are from understanding quantifiers, concepts, extensions and predication in general !

Radical mathematical logicism is the position that logic (or pure rationality) only exists fully in mathematics (and mathematical models in science).  Natural language can only attain an approximate rationality via a mathematical pragmatics (as in computer science).

There is, at first sight at least,  a huge chasm between our mathematics and the complex organic self-directed concreteness of living systems and consciousness. But this chasm can be bridged if we study mathematical theories qua theories, their diachronic and synchronic systemic articulation and organicity seen as an abstract version of consciousness and life. 

If in mathematics both formal and conceptual clarity are of great importance, in philosophy they are even more so. While agreeing with the quote from Husserl above, we do not wish to undermine the greatness of formal clarity and the huge progress in philosophy that, in the scale of things, would be achieved by a formal philosophy - even if this does not mean the ultimate clarity or the highest development of the philosophical project.

Tuesday, April 23, 2024

Words of Pavel Tichý from Foundations of Frege's Logic (1988)

Fate has not been kind to Gottlob Frege and his work. His logical achievement, which dwarfed anything done by logicians over the preceding two thousand years, remained all but ignored by his contemporaries. He liberated logic from the straight-jacket of psychologism only to see others claim credit for it. He expounded his theory in a monumental two-volume work, only to find an insidious error in the very foundations of the system. He successfully challenged the rise of Hilbert-style formalism in logic only to see everybody follow in the footsteps of those who had lost the argument. Ideas can live with lack of recognition. Even ignored and rejected, they are still there ready to engage the minds of those who find their own way to them. They are in danger of obliteration, however, if they are enlisted to serve conceptions and purposes incompatible with them. This is what has been happening to Frege's theoretical bequest in recent decades. Frege has become, belatedly, something of a philosophical hero. But those who have elevated him to this status are the intellectual heirs of Frege's Hilbertian adversaries, hostile to all the main principles underlying Frege's philosophy. They are hostile to Frege's platonism, the view that over and above material objects, there are also functions, concepts, truth-values, and thoughts. They are hostile to Frege's realism, the idea that thoughts are independent of their expression in any language and that each of them is true or false in its own right. They are hostile to the view that logic, just like arithmetic and geometry, treats of a specific range of extra-linguistic entities given prior to any axiomatization, and that of two alternative logics—as of two alternative geometries—only one can be correct. And they are no less hostile to Frege's view that the purpose of inference is to enhance our knowledge and that it therefore makes little sense to infer conclusions from premises which are not known to be true. We thus see Frege lionized by exponents of a directly opposing theoretical outlook. 

(...)

To the most advanced among the exponents of the New Age logic even this is not enough. Why, they ask, cling dogmatically to consistency ? Why not jettison the law of non-contradiction (...) Men of action (the Lenins and Hitlers of this world) have long been familiar with the advantages of embracing contradictions. They know that it not only neatly solves all problems in logic proper, but provides an intellectual key to 'final solutions' in other fields of human endeavour.
